| [ |
| { |
| "chunk_id": "00a7a0d8-021c-4a24-aad6-f77237171c1f", |
| "text": "Non-Negative Networks Against Adversarial Attacks\nWilliam Fleshman,1 Edward Raff,1,2 Jared Sylvester,1,2 Steven Forsyth,3 Mark McLean1\n1Laboratory for Physical Sciences, 2Booz Allen Hamilton, 3Nvidia\n{william.fleshman, edraff, jared, mrmclea}@lps.umd.edu, sforsyth@nvidia.com\nAdversarial attacks against neural networks are a problem of considerable importance, for which effective defenses are not yet readily available. We make progress toward this problem\nPerturbing an arbitrary byte of an executable file will most likely change the functionality of the file or prevent it from executing entirely. This property is useful for defending against an adversarial attack, as a malware author needs to", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 0, |
| "total_chunks": 45, |
| "char_count": 704, |
| "word_count": 93, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e6bc079b-abcc-41b2-adff-73a6ba5fcc81", |
| "text": "by showing that non-negative weight constraints can be used to improve resistance in specific scenarios. In particular, we show that they can provide an effective defense for binary classification problems with asymmetric cost, such as malware or spam detection. We also show the potential for non-negativity to be helpful to non-binary problems by applying it to image classification.\n1 Introduction\nRecently, there has been an increased research effort in exploring adversarial examples which fool machine learning classifiers (Goodfellow, Shlens, and Szegedy, 2015; Kurakin, Goodfellow, and Bengio, 2017a; Szegedy et al., 2014; Yuan et al., 2017). The majority of the existing research focuses on the image domain, where an example is generated by making small perturbations to input pixels in order to make a large change in the distribution of predicted class probabilities. We are particularly interested in adversarial attacks for malware detection, which is the task of determining if a file is benign or malicious. This involves a real-life adversary (the malware author) who is attempting to subvert detection tools, such as anti-virus programs. With machine learning approaches to malware detection becoming more prevalent (Raff et al., 2018; Pascanu et al., 2015; Saxe and Berlin, 2015; Sahs and Khan, 2012), this is an area that urgently requires solutions. For example, Raff et al. (2018) trained a convolutional neural network called MalConv to distinguish between benign and malicious Windows executable files.\nevade detection with a working malicious file. Kreuk et al. (2018) were able to bypass these limitations by applying gradient-based attacks to create perturbations which were restricted to bytes located in unused sections of malicious executable files. The adversarial examples remained just as malicious, but the classifier was fooled by the introduction of overwhelmingly benign yet unused sections of the file. This is possible because the adversary controls the input, and the EXE format allows unused sections. Because of the complications and obfuscations that are available to malware authors, it is not necessarily possible to tell that a section is unused, even if its contents appear random. This is an additive only adversary — i.e., the attacker can only add features — which has been widely used and will be the focus of our study. An analogy to the image domain would be an attacker that could create new pixels which represent the desired class and put them outside of the cropping box of the image, such that they would be in the digital file, but never be seen by a human observer. This contrasts with a standard adversarial attack on images, since the attacker is typically limited to changing the values of existing pixels in the image rather than introducing new pixels entirely.\nGiven these unique characteristics and costs, we note that the malware case is one where we care only about targeted attacks, [...] when not under attack are an acceptable cost for remediating targeted attacks. In this scenario, the effective accuracy of the system would be the accuracy under attack, which will be at or near zero without proper defenses. We introduce an approach to tackle targeted adversarial attacks by exploiting non-negative learning constraints. We will highlight related work in section 2. In section 3 we will detail our motivation for non-negative learning for malware, as well as how we generalize its use to multi-class problems like image classifiers. The attack scenario and experiments on malware,", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 1, |
| "total_chunks": 45, |
| "char_count": 3543, |
| "word_count": 549, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f9346df4-3ca0-4f2f-abf1-c71d6a21cc98", |
| "text": "When working with images, any pixel can be arbitrarily altered, but this freedom does not carry over to the malware case. The executable format follows stricter rules which constrain the options available to the attacker (Kreuk et al., 2018; Russu et al., 2016; Grosse et al., 2016; Suciu, Coull, and Johns, 2018).\nspam, and image domains will be detailed in section 4. In section 5 we will demonstrate how our approach reduces evasions to almost 0% for malware and exactly 0% for spam detection. On images we show improvements to robustness against confident adversarial attacks, showing that there is potential for non-negativity to aid in non-binary problems. We will end with our conclusions in section 6.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 2, |
| "total_chunks": 45, |
| "char_count": 715, |
| "word_count": 117, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0eacea93-6af3-49a9-852c-6c92d3b2e150", |
| "text": "2 Related Work\nThe issues of targeted adversarial binary classification problems, as well as the additive adversary, were first brought up by Dalvi et al. (2004), who noted their importance in a number of domains like fraud detection, counter terrorism, surveillance, and others. There have been several attempts at creating machine learning classifiers which can defend against such adversarial examples. Yuan et al. (2017) provide a thorough survey of both attacks and defenses specifically for deep learning systems. Some of these attacks will be used to compare the robustness of our technique to prior methods.\nIn our case we are learning against a real life adversary in a binary classification task, similar to the initial work in this space on evading spam filters (Lowd and Meek, 2005b; Dalvi et al., 2004; Lowd and Meek, 2005a). Our malware case gives the defender a slight comparative advantage in constraining the attack to produce a working binary, where spam authors can insert more arbitrary content.\nPrior works have looked at similar weight constraint approaches to adversarial robustness. Kołcz and Teo (2009) use a technique to keep the distribution of learned weights associated with features as even as possible during training. By preventing any one feature from becoming overwhelmingly predictive, they force the adversary to manipulate many features in order to cause a misclassification. Similarly, Grosse et al. (2016) tested a suite of feature reduction methods specifically in the malware domain.\n3 Isolating Classes with Non-Negative Weight Constraints\nWe will start by building an intuition on how logistic regression with non-negative weight constraints assigns predictive power to only features indicative of the positive (+) class while ignoring those associated with the negative (−) class. Let C(·) be a trained logistic regression binary classifier of the form C(x) = sign(w^T x + b), where w is the vector of non-negative learned coefficients of C(·), x is a vector of boolean features for a given sample, and b is a scalar bias. The decision boundary of C(·) exists where w^T x + b = 0, and because w^T x ≥ 0 ∀x, the bias b must be strictly negative in order for C(·) to have the capacity to assign samples to both classes. The decision function can then be rewritten as:\nC(x) = (+) if w^T x ≥ |b|, (−) if w^T x < |b|   (1)\nBecause w is non-negative, the presence of any feature xi ∈ x can only increase the result of the dot product, thus pushing the classification toward (+). Weights associated with features that are predictive of class (−) will therefore be pushed toward 0 during training. When no features are present (x = 0) the model defaults to a classification of (−) due to the negative bias b. Unless a sufficient number of features predictive of class (+) are present in the sample, the decision will remain unchanged. A classifier trained in this way will use features indicative of the (−) class to set the bias term, but will not allow those features to participate in", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 3, |
| "total_chunks": 45, |
| "char_count": 3026, |
| "word_count": 504, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "299376cc-d3ae-4e65-87e3-5ea832534b62", |
| "text": "classification at test time. The same logic follows for logistic regression with non-boolean features if the features are also non-negative or scaled to be non-negative before training.\nGiven a problem with asymmetric misclassification goals, we can leverage this behavior to build a defense against adversarial attacks. For malware detection, the malware author wishes to avoid detection as malware (+), and instead induce a false detection as benign (−). However, there is no desire for the author of a benign program to make their applications be detected as malicious. Thus, if we model malware as the positive class with non-negative weights, nothing can be added to the file to make it seem more benign to the classifier C(·). Because executable programs must maintain functionality, the malware author can not trivially remove content to reduce the malicious score either. This leaves the attacker with no recourse but to re-write their application, or perform more non-trivial acts such as packing to obscure information. Such obfuscations can then be remediated through existing approaches like dynamic analysis (Ugarte-Pedrero et al., 2016; Chistyakov et al., 2017).\nFirst, they used the mutual information between features and the target class in order to limit the representation of each file to those features. Like Kołcz and Teo, they created an alternative feature selection method to limit training to features which carried near equal importance. They found both of these techniques to be ineffective.\nOur approach is also a feature reduction technique. The difference is that we train on all features, but only retain the capacity to distinguish a reduced number of features at test time — namely, only those indicative of the positive class. Training on all features allows the model to automatically determine which are important for the target class and utilizes the other features to accurately set a threshold, represented by the bias term, for determining when a requisite quantity of features are present for assigning samples to the target class.\nChorowski and Zurada (2015) used non-negative weight constraints in order to train more interpretable neural networks. They found that the constraints caused the neurons to isolate features in meaningful ways. We build on this technique in order to isolate features while also preventing our models from using the features predictive of the negative class.\nGoodfellow, Shlens, and Szegedy (2015) used RBF networks to show that low capacity models can be robust to adversarial perturbations but found they lack the ability to generalize.\nNotably, this method also applies to neural networks with a sigmoid output neuron as long as the input to the final layer and the final layer's weights are constrained to be non-negative. The output layer of such a network is identical to our logistic regression example. The cumulative operation of the intermediate layers φ(·) can be interpreted as a re-representation of the features before applying the logistic", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 4, |
| "total_chunks": 45, |
| "char_count": 3025, |
| "word_count": 474, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "292fdb31-241f-4374-baa4-d1e87c995452", |
| "text": "With our methods we find we are able to achieve generalization while also producing low confidence predictions during targeted attacks.\nregression such as C(x) = sign(w^T φ(x) + b). We will denote when a model is trained in a non-negative fashion by appending \"+\" to its name. The ReLU function is a good choice for intermediate layers as it maintains the required non-negative representation and is already found in most modern neural networks.\napparent probability of class i without having impacted the model's response to class i. Phrased analogously as an image classification problem, adversaries don't need to remove the amount of \"cat\" in a photo to induce a decision of \"potato,\" but only increase the amount of \"potato.\"\nIn addition Chorowski and Zurada (2015) proved that a non-negative network trained with softmax activation can be transformed into an equivalent unconstrained network. This means there is little reason to expect our non-negative approach to provide benefit if we stay with the softmax activation, as it has an equivalent unconstrained form and should be equally susceptible to all adversarial attacks. As such we must move away from softmax to get the benefits of our non-negative approach in a multi-class scenario. Instead we can look at the classification problem in a one-vs-all fashion by replacing the softmax activation over K classes with K independent classifications trained with the binary cross-entropy loss and using the sigmoid activation σ(z) = 1/(1 + exp(−z)). Final probabilities after training are obtained by normalizing the sigmoid responses to sum to one. We find that this strategy combined with non-negative learning provides some robustness against an adversary producing targeted high-confidence attacks (e.g., the network is 99% sure the cat is a potato). The one-vs-all component makes it such that increasing the confidence of a new class eventually requires reducing the confidence of the original class. The non-negativity increases the difficulty of this removal step, resulting in destructive changes to the image.\nFor building intuition, in Figure 1 we provide an example of how this works for neural networks using MNIST. To fool the network into predicting the positive class (one) as the negative class (zero), the adversary must now make larger removals of content — to the point that the non-negative attack is no longer a realistic input.\nFigure 1: Left: Original Image; Middle: Gradient attack on LeNet; Right: Gradient attack on non-negative LeNet+. The attack on the standard model was able to add pixel intensity in a round, zero-shaped area to fool the classifier into thinking this was a zero. The attack on the constrained model was forced to remove pixel intensity from the one rather than adding in new values elsewhere.\nIt should be noted that constraining a model in this way does reduce the amount of information available for discriminating samples at inference time, and a drop in classification accuracy is likely to occur for most problems. The trade off\nWe make two important notes on how we apply non-negative training for image classification.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 5, |
| "total_chunks": 45, |
| "char_count": 3132, |
| "word_count": 503, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "cc21b1f4-602d-40a9-ae1f-b7cd7e3dfa85", |
| "text": "between adversarial robustness and performance should be analyzed for the specific domain and use case.\nA practical benefit to our approach is that it is simple to implement. In the general case, one can simply use gradient descent with a projection step that clips negative values to zero after each step update. We implemented our approach\nFirst, we pre-train the network using the standard softmax activation, and then re-train the weights with our one-vs-all style and non-negative constraints on the final fully connected layers. [...] so we find only a small difference in accuracy between results, where training non-negative networks from scratch often has reduced accuracy.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 6, |
| "total_chunks": 45, |
| "char_count": 661, |
| "word_count": 104, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ef7d5a37-ed7b-433f-9b19-0edaf22918d1", |
| "text": "in Keras (Chollet, 2015) by simply adding the \"NonNeg\" constraint to each layer in the model.1\nSecond, we continue to use batch normalization without constraints. This is because batch normalization can be rolled into the bias term and a re-scaling of the weights, and so does not break the non-negative constraint in any way. We find its positive impact on convergence greater when training with the non-negative constraints.\n3.1 Non-Negativity and Multi-Class Classification\nWhile the primary focus of our work is on binary tasks like malware and spam detection, it is also worth asking if it can be applied to multi-class problems. In this work we show that non-negativity can still have some benefit in this scenario, but we find it necessary to re-phrase how such tasks are handled. Normally, one would use the softmax (softmax(v)_i = exp(v_i) / Σ_{j=1}^{n} exp(v_j)) on the un-normalized probabilities v given by the final layer.\n4 Experimental Methodology\nHaving defined the mechanism by which we will defend against targeted adversarial attacks, we will investigate its application to two malware detection models, one spam detection task, and four image classification tasks. We will spend", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 7, |
| "total_chunks": 45, |
| "char_count": 1190, |
| "word_count": 190, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e30cbcb4-bd70-43ee-bdd2-c0de3918d5ad", |
| "text": "The probability of a class i is then taken as softmax(v)_i. However we find that the softmax activation makes it easier to attack networks.\nTake the non-attacked activation pattern v, where v_i > v_j ∀ j ≠ i. Now consider the new activation pattern v̂, which is produced by an adversarially perturbed input with the goal of inducing a prediction as class q instead of i. Then it is necessary to force v̂_q > v̂_i. Yet even if v̂_i ≈ v_i, the probability of class i can be made arbitrarily low by continuing to maximize the response of v̂_q.\nmore time introducing the malware attacks, as readers may not have as much experience with this domain.\nFor malware, we will look at MalConv (Raff et al., 2018), a recently proposed neural network that learns from raw bytes. We will also consider an N-Gram based model (Raff et al., 2016). Both of these techniques are applied to the raw bytes of a file. We use the same 2,000,000 training and 80,000 testing datums as used in Raff and Nicholas (2017).\nFollowing recommendations by Biggio, Fumera, and Roli", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 8, |
| "total_chunks": 45, |
| "char_count": 1040, |
| "word_count": 186, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "86d3aa5b-946d-4e17-911a-54598f9f819b", |
| "text": "This means we are able to diminish the\n1https://keras.io/constraints/\n(2014) we will specify the threat model under which we perform our evaluations. In all cases, our threat model will assume a white-box adversary that has full knowledge of the models, their weights, and the training data. [...] classification problems, we assume in the threat model that our adversary can only add new features to the model (i.e., in the feature vector space they can change a zero valued feature to non-zero, but can not alter an already non-zero value). We recognize that this threat model does not encompass all possible adversaries, but note that it is one of the most commonly used adversarial models spanning many domains. The \"Good Word\" attack on spam messages is itself an example of this threat model's action space, and one of the initial works in adversarial learning noted its wide applica-\nThe possibility of evading non-negativity on dynamic features requires addressing the cat-and-mouse game around VM detection, stealthy malware, and the nature of features used. This discussion is important, but beyond the current ambit, which we limit to static analysis. We are interested here in whether or not non-negativity has benefit to the additive adversary, not more sophisticated ones.\nSpam Threat Model Specifics For spam detection the adversary will be restrained to the insertion of new content into an existing spam message.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 9, |
| "total_chunks": 45, |
| "char_count": 1425, |
| "word_count": 228, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b53b33b2-b4c9-43b6-ac7e-c922d7570882", |
| "text": "bility (Lowd and Meek, 2005a). In a recent survey, Maiorca, Biggio, and Giacinto (2018) found that 9 out of 10 works\nThis is because we are interested in the lower-effort \"good word\" attack scenario. Despite being less sophisticated, it remains effective today.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 10, |
| "total_chunks": 45, |
| "char_count": 261, |
| "word_count": 42, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a2eb34cb-7d26-4883-9146-260baf91bb17", |
| "text": "in evading malicious PDF detectors used the additive only threat model, and these additive adversaries succeeded in both white-box and black-box attacks. Demontis et al. (2017) considered both the additive only adversary, as well as one which could add or remove features, as applied to Android malware detection. On their Android data, they demonstrate a learning approach which provides bounds on adversary success under both adversary action models, making it robust but\nTackling wholly changed and newly crafted spam messages is beyond our current purview.\nImage Threat Model Specifics Image classification does not exhibit the asymmetric error costs that malware and spam do. The purpose of studying it is to determine if our non-negativity can have benefit to multi-class problems.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 11, |
| "total_chunks": 45, |
| "char_count": 788, |
| "word_count": 121, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8c609977-b2d4-4eb9-8e6d-5ce536b43aa6", |
| "text": "still vulnerable. Under the white-box additive attack scenario, their Secure-SVM detection rate drops from 95% on normal test data down to 60% when attacked. Finally, for the case of Windows PE data, three different works have attacked the MalConv model using the additive adversary (Kreuk et al., 2018; Kolosnjaji et al., 2018; Suciu, Coull, and Johns, 2018).\nThe additive threat model makes sense to study, as it is easier to implement for the adversary and currently successful in practice. For this reason, it makes little sense for the adversary to consider a more powerful threat model (e.g., adding and removing features) which would increase their costs and effort, when the simpler and cheaper alternative works. We will show in section 5 that while not perfect, our non-negative defense is the first to demonstrably thwart the additive adversary while still obtaining reasonable accuracies. This forces a potential adversary to \"step up\" to a more powerful model, which increases their effort and cost. We contend this is of intrinsic value eo ipso.\nBelow we will review additional details regarding the threat model for each data-type we consider (Windows PE, emails, and images). This is followed by specifics on how the attacks are carried out for each classification algorithm, as the details are different in all cases due to model and problem diversity.\nIt is intuitive that the answer would be \"no,\" but we nevertheless find that some limited benefit exists.\nIn this threat model, there is no \"adding\" or \"removing\" features, due to the intrinsic nature of images. As such we consider the L1 distance between an original image x and its adversarially perturbed counterpart x̂. The adversary may arbitrarily alter any pixels, so long as ∥x − x̂∥1 < ϵ, where ϵ is a problem-dependent maximum distance.\n4.1 Attacking MalConv\nMalConv is the primary focus of our interest, as gradient based attacks can not naively be applied to its architecture. Only recently have attacks been proposed (Kolosnjaji et al., 2018; Kreuk et al., 2018), and we will show that non-negativity allows us to thwart these adversaries. In MalConv, raw bytes of an executable are passed through a learned embedding layer which acts as a lookup table to transform each byte into an 8-dimensional vector of real values. This representation is then passed through a 1-dimensional gated convolution, global max pooling, and then a fully connected layer with sigmoid output. To handle varying file sizes, all sequences of bytes are padded to a fixed length of 2,100,000 using a special \"End of File\" value (256) from outside of the normal range of bytes (0–255).\nThe raw bytes are both discrete and non-ordinal, which prevents gradient based attacks from manipulating them directly. Kreuk et al. (2018) (and independently Kolosnjaji et\nWindows PE Threat Model Specifics For PE malware we use the appending of an unused section as the attack vector for technical simplicity.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 12, |
| "total_chunks": 45, |
| "char_count": 2955, |
| "word_count": 480, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f95bc30d-a364-4f50-b80a-dedb4c66f7b2", |
| "text": "The adversary will be allowed to append any desired number of bytes into an added and unused section of the binary file, until no change in the evasion rate occurs. Our approach should still work if the adversary performed insertions between functions rather than at the end of the file. Real malware authors often employ packing to obfuscate the entire binary. This work does not consider defense against packing obfuscation, except to note that the common de-\nal. (2018)) devised a clever way of modifying gradient based attacks to work on EXEs, even with a non-differentiable embedding layer, and we will briefly recap their approach. This is done by performing the gradient search of an adversarial example in the 8-dimensional vector space produced by the embedding layer. A perturbed vector is then mapped to the byte which produces the nearest neighbor in the embedding space.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 13, |
| "total_chunks": 45, |
| "char_count": 883, |
| "word_count": 145, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9084d847-60cf-42eb-9421-20f267029efa", |
| "text": "fensive technique is to employ dynamic analysis. Our non-negative approach can be applied to the features derived from dynamic analysis as well, but that is beyond the scope of this paper.\n[...] We also remove the lasso regularization from N-Gram+ as the constraints are already performing feature selection by pushing the weights of benign features to zero.\nKeeping with the notation of Kreuk et al., let M ∈ R^(n×d) be the lookup table from the embedding layer such that M : X → Z, where X is the set of n possible bytes and Z ⊆ R^d is the embedding space. Given a sequence of bytes x = (x0, x1, . . . , xL), we generate a sequence of vectors z = (M[x0], M[x1], . . . , M[xL]), where M[xi] indicates row xi of M. Now we generate a new vector z̃ = z + δ, where δ is a perturbation generated from an adversarial attack. We map each element z̃i ∈ z̃ back to byte space by finding the nearest neighbor of z̃i among the rows of M. By applying this technique to only specific safe regions of a binary, the execution of gradient based attacks against MalConv is possible without breaking the binary. To ensure that a \"safe\" area exists, they append an unused section to the binary. The larger this appended section is, the\n4.3 Spam Filtering\nAs mentioned in the previous section, Lowd and Meek (2005b) created \"Good Word\" attacks to successfully evade spam filters without access to the model. These attacks append common words from normal emails into spam in order to overwhelm the spam filter into thinking the email is legitimate.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 14, |
| "total_chunks": 45, |
| "char_count": 1504, |
| "word_count": 274, |
| "chunking_strategy": "semantic" |
| }, |
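The embedding round-trip described in the chunk above can be sketched with NumPy: perturb the embedded sequence, then snap each perturbed vector to the nearest row of M to recover bytes. The table size, embedding dimension, and random values below are illustrative stand-ins, not the paper's trained MalConv embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bytes, d = 257, 8                 # 256 byte values plus a special padding token; d is made up
M = rng.normal(size=(n_bytes, d))   # embedding lookup table M : X -> Z

x = np.array([10, 200, 37])         # a tiny byte sequence
z = M[x]                            # z_i = M[x_i]

delta = 0.1 * rng.normal(size=z.shape)   # stand-in for an adversarially chosen perturbation
z_tilde = z + delta

# Map each perturbed vector back to byte space: nearest row of M under the L2 norm.
dists = np.linalg.norm(z_tilde[:, None, :] - M[None, :, :], axis=-1)
x_adv = dists.argmin(axis=1)        # adversarial byte sequence, same length as x
```

In the real attack, δ would come from a gradient step, and only bytes inside the appended "safe" section would be allowed to change.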
| { |
| "chunk_id": "513648f9-aee8-46ea-a13e-b0f35b54f947", |
| "text": "In their seminal work, they noted that it was unrealistic\nmore space the adversary has to develop a strong enough to assume that an adversary would have access to the spam\nsignal of \"benign-ness\" to fool the algorithm. filter, and would thus need to somehow guess at which words\nWe replicate the attack done by Kreuk et al. (2018) which are good words, or to somehow query the spam filter to steal\nuses the fast gradient sign method (FGSM) (Goodfellow, information about which words are good. Others have simply\nShlens, and Szegedy, 2015) to generate an adversarial ex- used the most frequent words from the ham messages as a\nample in the embedding space. We find our ez by solving: proxy to good word selection that an adversary could repliez = z + ϵ · sign ∇zeℓ(z, y; θ) , where eℓ(·) is the loss func- cateInge,(Jorgensen,2007). We Zhou,take theandmoreInge,pessimistic2008; Zhou,approachJorgensen,that andthetion of our model parameterized by θ and z is the embedded\nadversary has full access to our model, and can simply select\nrepresentation of some input with label y. The new ez is then the words that have the largest negative coefficients (i.e. themapped back into byte space using the method previously dismost good-looking words) for their attack. This is the same\ncussed.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 15, |
| "total_chunks": 45, |
| "char_count": 1283, |
| "word_count": 215, |
| "chunking_strategy": "semantic" |
| }, |
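The FGSM update above is one line once a gradient is available. The sketch below substitutes a toy linear model with a sigmoid output for the real network, so the gradient of the binary cross-entropy loss with respect to z has a closed form; only the update rule itself comes from the text.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
d = 8
w = rng.normal(size=d)     # stand-in for the model parameters theta
z = rng.normal(size=d)     # embedded representation of one input
y = 1.0                    # its label (1 = malicious)

def loss(v):
    # binary cross-entropy of the toy model's score w.v
    p = sigmoid(w @ v)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# For this linear model, d loss / d z = (sigmoid(w.z) - y) * w
grad = (sigmoid(w @ z) - y) * w

eps = 0.5
z_tilde = z + eps * np.sign(grad)   # FGSM: z~ = z + eps * sign(grad_z loss)

assert loss(z_tilde) >= loss(z)     # a single FGSM step cannot decrease this model's loss
```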
| { |
| "chunk_id": "be52c2fc-bfb6-40cc-8a33-c0edeab8938f", |
| "text": "We performed the attack on 1000 randomly selected\nassumption we make in attacking the n-gram model.\nmalicious files, varying the size of the appended section used\nBy showing that our non-negative learning approach elimito generate the adversarial examples.\nnates the possibility of good word attacks in this pessimistic\nFor MalConv, adding an unused section allows an attacker\ncase, we intrinsically cover all weaker cases of an adversary's\nto add benign features which overwhelm the classification.\nability. We note as well that Lowd and Meek speculated theOur hypothesis is that MalConv+ should be immune to the\nonly effective solution to stop the Good Word attack would\nattack since it only learns to look for maliciousness, defaultbe to to periodically re-train the model. By eliminating the\ning to a decision of benign when no other evidence is present.\npossibility of performing Good Word attacks, we increase\nWe also note that this corresponds well with how anti-virus\nthe cost to operate for the adversary, as they must now exert\nprograms prefer to have lower false positive rates to avoid\nmore effort into crafting significantly novel spam to avoid\ninterfering with users' applications.\ndetection. By eliminating the lowest-effort approach the adversary can take, we remediate a sub-component of the spam\n4.2 Attacking N-Gram\nproblem, but not spam as a whole. The N-Gram model was trained using lasso regularized lo- We train two logistic regression models on the TREC 2006\ngistic regression on the top million most frequent 6-byte and 2007 Spam Corpora.2 The 2006 dataset contains 37,822\nn-grams found in our 2 million file training set. The 6-byte emails with 24,912 being spam. The 2007 dataset contains\ngrams are used as boolean features, where a 1 represents the 75,419 messages with 50,199 of them being spam. We pern-gram's presence in a file.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 16, |
| "total_chunks": 45, |
| "char_count": 1859, |
| "word_count": 298, |
| "chunking_strategy": "semantic" |
| }, |
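The boolean 6-byte-gram featurization described above can be sketched directly; the three-entry vocabulary below is a toy stand-in for the paper's top one million grams.

```python
import numpy as np

def ngram_features(data: bytes, vocab: dict) -> np.ndarray:
    """Boolean feature vector: 1 if the 6-byte gram appears anywhere in the file."""
    x = np.zeros(len(vocab), dtype=np.uint8)
    for i in range(len(data) - 5):
        idx = vocab.get(data[i:i + 6])
        if idx is not None:
            x[idx] = 1
    return x

# Tiny illustrative vocabulary; a real one would be learned from the training corpus.
vocab = {b"MZ\x90\x00\x03\x00": 0, b"\x00\x00\x00\x00\x00\x00": 1, b"PE\x00\x00d\x86": 2}
x = ngram_features(b"MZ\x90\x00\x03\x00\x00\x00\x00\x00\x00\x00", vocab)
print(x)   # -> [1 1 0]
```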
| { |
| "chunk_id": "fbfc52da-2ed1-4b68-a58d-1ce4e9c9c97c", |
| "text": "Lasso performed feature selection formed very little text preprocessing and represented each\nby assigning a weight of 0 to most of the n-grams. The result- email as a vector of boolean features corresponding to the top\ning model had non-zero weights assigned to approximately 10,000 most common words in the corpus. The first model is\n67,000 of the features. trained with lasso regularization in a traditional manner. The\nWe devise a white-box attack similar to the attack Kreuk second model is trained with non-negative constraints on the\net al. (2018) used against MalConv in that we inject benign coefficients in order to isolate only the features predictive of\nbytes into an unused section appended to malicious files. spam during inference. Specifically, we take the most benign 6-grams by sorting\nthem based on their learned logistic regression coefficients. 4.4 Targeted Attacks on Image Classification\nWe add benign 6-grams one at a time to the malicious file For our image classification experiments we follow the recuntil a misclassification occurs. This ends up being the same ommendations of Carlini and Wagner (2017a) for evaluating\nkind of approach Lowd and Meek (2005b) used to perform an adversarial defense. In addition to the FGSM attack, we\n\"Good Word\" attacks on spam filters, except we assume the will also use a stronger iterated gradient attack. Specifically\nadversary has perfect knowledge of the model. The simplicity we use the Iterated Gradient Attack (IGA) introduced in (Kuof the N-Gram model allows us to do this targeted attack, rakin, Goodfellow, and Bengio, 2017b), using Keras for our\nand specifically look at the evasion rate as a function of the models and Foolbox (Rauber, Brendel, and Bethge, 2017)\nnumber of inserted features. for the attack implementations. 
We evaluated the confidences\nTo prevent these attacks, we train N-Gram+ using non- at which such attacks can succeed against the standard and\nnegative weight constraints on the same data. This model is\nprevented from assigning negative weights to any of the fea- 2See https://trec.nist.gov/data/spam.html our non-negative models on MNIST, CIFAR 10 and 100, and these cases since there is no valid gradient for this output. A persistent adversary could still create a successful adverWe note explicitly that the IGA attack is not the most sarial example by replacing the sigmoid output with a linear\npoweruful adversary we could use. Other attacks like Pro- activation function before running the attack.\njected Gradient Decent (PGD) and the C&W attack(Carlini\nand Wagner, 2017b) are more successfully, and defeat our Evasion Rate as Size of Appended Section Increases\nmulti-class generalization of non-negative learning. We study\nIGA to show that there is some benefit, but that overall the\nmulti-class case is a weakness of our approach.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 17, |
| "total_chunks": 45, |
| "char_count": 2836, |
| "word_count": 448, |
| "chunking_strategy": "semantic" |
| }, |
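The insertion attack described above (sort features by learned coefficient, append the most benign grams one at a time until the label flips) and the reason the non-negative model blocks it can be sketched with toy weights; the coefficients, intercepts, and feature vector below are invented for illustration.

```python
import numpy as np

def evade(x, coef, intercept):
    """Append the most benign features one at a time until the model says 'benign'.
    Returns the number of features added, or None if the attack is impossible."""
    x = x.copy()
    order = np.argsort(coef)              # most negative (benign) coefficients first
    for added, j in enumerate(order, start=1):
        if coef[j] >= 0:                  # no benign-weighted features left to add
            return None
        x[j] = 1
        if x @ coef + intercept < 0:      # decision flipped to benign
            return added
    return None

coef_lasso = np.array([2.0, 1.5, -1.0, -2.5])   # unconstrained: benign features exist
coef_nonneg = np.array([2.0, 1.5, 0.0, 0.0])    # non-negative: benign weights pushed to 0
x_mal = np.array([1.0, 1.0, 0.0, 0.0])          # malicious file hitting two malicious grams

print(evade(x_mal, coef_lasso, -0.5))   # -> 2    two benign grams flip the label
print(evade(x_mal, coef_nonneg, -0.5))  # -> None attack impossible without removals
```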
| { |
| "chunk_id": "5806328d-4fbe-4849-bfc6-1e8a874822e7", |
| "text": "We find the\nresults interesting and informative because our prior belief\nwould have been that non-negativity would produce no bene-\n80fit to the defender at all, which is not the case. (%)\nWe are specifically interested in defending against an adversary creating a high confidence targeted attack (e.g., a Rate 60 MalConv\nlabel was previously classified as \"cat\", but now is classified MalConv+\nas \"potato\" with a probability of 99%). As such we will look 40at the evasion rate for an adversary altering an image to other Evasion\nclasses over a range of target probabilities p. The goal is to 20\nsee the non-negative trained network have a lower evasion\nrate, especially for p ≥90%. 0\nFor MNIST and CIFAR 10, since there are only 10 classes,\nwe calculate the evasion rate at a certain target probability 0 20 40 60 80 100\np as the average rate at which an adversary can successfully Appended Section Size as Percent of File\nalter the networks prediction to every other class and reach a\nminimum probability p. For CIFAR 100 and Tiny ImageNet,\nthe larger number of classes prohibits this exhaustive pairwise Evasion Rate as Top Benign N-Grams are Added\ncomparison. Instead we evaluate the evasion rate against a\nrandomly selected alternative class. On MNIST, CIFAR 10, and CIFAR 100, due to their small\nimage sizes (≤32×32), we found that adversarial attacks 100\nwould often \"succeed\" by changing the image to an unrecognizable degree. For this reason we set a threshold of 60 on the (%) 80\nL1 distance between the original image and the adversarial\nmodification. If the adversarial modified image exceeded this Rate 60\nthreshold, we counted the attack as a failure. This threshold\nN-Gram\nwas determined by examining several images; more informa- 40 N-Gram+\ntion can be found in the appendix.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 18, |
| "total_chunks": 45, |
| "char_count": 1791, |
| "word_count": 302, |
| "chunking_strategy": "semantic" |
| }, |
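The success criterion described above (an attack counts only if it reaches the target confidence p without exceeding the L1 distortion threshold) amounts to simple bookkeeping; the outcome tuples below are hypothetical.

```python
def targeted_evasion_rate(results, p, l1_budget=60.0):
    """results: (confidence_reached, l1_distance) per attacked image/target pair.
    Success requires confidence >= p within the L1 budget; everything else fails."""
    wins = [conf >= p and l1 <= l1_budget for conf, l1 in results]
    return sum(wins) / len(wins)

# Hypothetical attack outcomes: (confidence the attack reached, L1 distortion used).
results = [(0.99, 12.0), (0.35, 5.0), (0.95, 80.0), (0.10, 1.0)]
print(targeted_evasion_rate(results, p=0.9))   # -> 0.25 (the third attack blew the L1 budget)
```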
| { |
| "chunk_id": "28d1a237-f155-4007-829a-c792bd07c798", |
| "text": "For Tiny ImageNet this Evasion\nissue was not observed, and Foolbox's default threshold was 20\nused. 5 Results\nHaving reviewed the method by which we will fight targeted 0 10 20 30 40\nadversarial attacks, and how the malware attacks will be ap- Number of Benign N-Grams Added\nplied, we will now present the results of our non-negative\nnetworks. First we will review those related to malware and Figure 2: Evasion rate (y-axis) for MalConv and N-Gram\nspam detection, showing that non-negative learning effec- based models. Top figure shows MalConv evasion as the\ntively neutralizes evasion by a malware author. Then we will appended section size increases, and bottom figure shows\nshow how non-negative learning can improve robustness on the N-Gram evasion as the number of benign n-grams are\nseveral image classification benchmarks. added. The number of files that evade increase as the size of\nthe appended section increases. The evasion rates remained\n5.1 Malware Detection fixed for all section sizes greater than 25% of the file size. Using the method outlined in section 4, Kreuk et al. (2018)\nreported a 100% evasion rate of their model. As shown in Our non-negative learning provides an effective defense,\nFigure 2, our replication of the attack yielded similar results with only 0.6% of files able to evade MalConv+. Theoretifor MalConv, which was evaded successfully for 95.4% of cally we would expect an evasion rate of 0.0%.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 19, |
| "total_chunks": 45, |
| "char_count": 1434, |
| "word_count": 234, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "97e5e09f-83d7-4b1a-b2fb-edc3bc5b4fe8", |
| "text": "Investigating\nthe files. The other 4.6% of files were all previously classi- these successful evasions uncovered a hidden weakness in\nfied as malware with a sigmoid activation of 1.0 at machine the MalConv architecture. We found that both MalConv and\nprecision due to floating-point rounding. The attack fails for MalConv+ learned to give a small amount of malicious preTable 1: Out of sample performance on malware detection in\nthe absence of attack. Classifier Accuracy % Precision Recall AUC %\n0.8 MalConv 94.1 0.913 0.972 98.1\nMalConv+ 89.4 0.908 0.888 95.3 Rate\nN-Gram 95.5 0.926 0.987 99.6 0.6\nN-Gram+ 91.1 0.915 0.885 95.5 Positive 0.4\ndictive power to the special End of File (EOF) padding value. True MalConv\n0.2 MalConv+This is most likely a byproduct of the average malicious file\nsize being less than the average benign file size in our training N-Gram\nset, which causes the supposedly neutral EOF value itself 0 N-Gram+\nto be seen as an indicator of maliciousness. The process of\nadding an unused file section necessarily reduces the amount 0 0.2 0.4 0.6 0.8 1\nof EOF padding tokens given to the network, as the file is False Positive Rate\nincreased in size (pushing it closer to the 2.1MB processing\nlimit) the new section replaces the EOF tokens. Replacing the Figure 3: ROC curves for MalConv and N-Gram malware\nslightly malicious EOF tokens with benign content reduces classifiers, with and without non-negative restraints, in the\nthe network's confidence in the file being malicious. absence of attack. The 0.6% of files that evaded MalConv+ only did so when\nfiles were small, and the appended section ended up comprising 50% of the resulting binary. 
The slight maliciousness positive trade-off, we also report the ROC curves for the\nfrom the EOF was the needed feature to push the network MalConv, MalConv+, N-Gram, and N-Gram+ classifiers in\ninto a decision of \"malicious.\" However, the removal of EOFs Figure 3.\nby the unused section removed this slight signal, and pushed While our non-negative approach has paid a penalty in acthe decision back to \"benign.\" If we instead replace the bytes curacy, we note that we can see this has predominately come\nof the unused section with random bytes from the uniform from a reduction in recall. Because features can only indicate\ndistribution, the files still evade detection. This means the maliciousness, some malicious binaries are labeled as benign\nevasion is not a function of the attack itself, but the modifica- due to a lack of information. This scenario corresponds to\ntion of the binary that removes EOF tokens. A simple fix to the preferred deployment scenario of security products in\nthis padding issue is to force the row of the embedding table general, which is to have a lower false positive rate (benign\ncorresponding to the special byte to be the zero vector during files marked malicious) at the expense of false negatives (matraining. This would prevent the EOF token from providing licious files marked benign) (Ferrand and Filiol, 2016; Zhou\nany predictive power during inference. and Inge, 2008; Yih, Goodman, and Hulten, 2006).", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 20, |
| "total_chunks": 45, |
| "char_count": 3113, |
| "word_count": 513, |
| "chunking_strategy": "semantic" |
| }, |
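The proposed fix (pin the EOF row of the embedding table to the zero vector during training) is a one-line change. The table below is random stand-in data, the index 256 for the EOF token is a hypothetical choice, and a real implementation would also freeze that row against gradient updates.

```python
import numpy as np

EOF = 256                       # hypothetical index for the special End-of-File padding value
rng = np.random.default_rng(2)
M = rng.normal(size=(257, 8))   # embedding table: 256 byte values + the EOF token

M[EOF] = 0.0                    # the fix: EOF embeds to the zero vector

padded = np.array([77, 90, EOF, EOF])   # a short file followed by EOF padding
z = M[padded]
assert np.all(z[2:] == 0.0)     # padding now contributes no signal downstream
```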
| { |
| "chunk_id": "9ff5d9ba-f401-4f6f-8918-9bbe86bfcc3f", |
| "text": "As such\nWe observed similar results for the N-Gram model. The the cost of non-negativity in this scenario is well aligned\nevasion rate increases rapidly as benign features are added with its intended use case, making the cost palatable and the\nto the malicious files. We found that appending the top 41 trade-off especially effective.\nmost benign features resulted in a 100% evasion rate. This To us it seems reasonable to accept a small loss in accuracy\nattack is completely mitigated by N-Gram+ since none of its when not under attack in exchange for a large increase in\nfeatures have negative weights supporting the benign class. accuracy when under attack. An interesting solution could be\nThe only way to alter the classification would be to remove employing a pair of models, one constrained and one not, in\nmalicious n-grams from the files. Our results for both models addition to some heuristic indicating an adversarial attack is\nare depicted in Figure 2. underway. The unconstrained model would generate labels\nduring normal operations and fail over to the constrained\nAccuracy vs Defense model during attack. The confidence of the constrained model\nThe only drawback of this approach is the possible reduc- could be used as this switching heuristic as we empirically\ntion in overall accuracy. Limiting the available information observe that the confidences during attack are much lower.\nat inference time will likely reduce performance for most Those who work in an industry environment and produce\nclassification tasks. Alas, many security related applications commercial grade AV products may object that our accuracy\nexist because adversaries are present in the domain. We have numbers do not reflect the same levels obtained today. 
We\nshown that under attack our normal classifiers completely fail remind these readers that we do not have access to the same\n— therefore a reduction in overall accuracy may be well worth amount of data or access to the resources necessary to prothe increase in model defensibility. Table 1 shows metrics duce training corpora of similar quality, and so it should not\nfrom our models under normal conditions for comparison. be expected that we would obtain the same levels of accuSince different members of the security community have racy as production systems. The purpose of this work is to\ndifferent desires with respect to the true positive vs. false show that a large class of models that have been used and attacked in prior works can be successfully defended against image classification.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 21, |
| "total_chunks": 45, |
| "char_count": 2543, |
| "word_count": 412, |
| "chunking_strategy": "semantic" |
| }, |
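The paired-model idea above can be sketched as a small decision rule; the probabilities and the 0.6 confidence threshold below are invented for illustration, not values from the paper.

```python
def failover_predict(p_unconstrained, p_constrained, conf_threshold=0.6):
    """Each argument is that model's predicted probability of 'malicious'.
    The constrained model's confidence, max(p, 1 - p), is the switching
    heuristic: when it collapses (as observed empirically under attack),
    trust the constrained model's label instead of the unconstrained one."""
    confidence = max(p_constrained, 1.0 - p_constrained)
    if confidence < conf_threshold:
        return p_constrained >= 0.5, "constrained"
    return p_unconstrained >= 0.5, "unconstrained"

print(failover_predict(0.99, 0.97))  # -> (True, 'unconstrained')  normal operation
print(failover_predict(0.05, 0.55))  # -> (True, 'constrained')    suspected additive attack
```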
| { |
| "chunk_id": "449d2fcb-f48e-494c-8e71-3f39c10a4835", |
| "text": "In particular, we find it is possible to\nthis common threat model. This comes at minor price, as just leverage non-negative learning as discussed in subsection 3.1\ndiscussed, but this is the first technique to show that it can be to provide robustness against confident targeted attacks. That\nwholly protective. is to say if the predicted class is yi, the adversary wants to\ntrick the model into predicting class yj, j ̸= i and that the\n5.2 Spam Filtering confidence of the prediction be ≥p. The accuracies for our traditional, unconstrained models were For MNIST we will use LeNet. Our out of sample accuhigh on both datasets, but both were susceptible to our version racy using a normal model is 99.2%, while the model with\nof Lowd and Meek's \"Good Word\" attack. Both classifiers non-negative constrained dense layers achieves 98.6%. For\nwere evaded 100% of the time by appending only 7 words CIFAR 10 and 100 we use a ResNet based architecture.3\nto each message in the 2006 case and only 4 words in the For CIFAR 10 we get 92.3% accuracy normally, and 91.6%\n2007 case.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 22, |
| "total_chunks": 45, |
| "char_count": 1071, |
| "word_count": 187, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ca4d297d-50b8-42ea-857f-fcc3565cec9b", |
| "text": "These words correspond to the features with the with our non-negative approach. On CIFAR 100 the same\nlowest regression coefficients (i.e., negative values with high architecture gets 72.2% accuracy normally, and 71.7% with\nmagnitude) for each model. our non-negative approach. For Tiny ImageNet, we also use a\nUse of the non-negative constraint lowers our accuracy for ResNet architecture with the weights of all but the final dense\nboth datasets when not under attack, but completely elimi- layers initialized pretrained from ImageNet.4 The normal\nnates susceptibility to these attacks as all \"Good Words\" have model has an accuracy of 56.6%, and the constrained model\ncoefficients of 0.0.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 23, |
| "total_chunks": 45, |
| "char_count": 691, |
| "word_count": 106, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "52938478-1262-44f2-8fd8-7b9f332f998b", |
| "text": "The spam author would only be able to 56.3%. The results as a function of the target confidence p\nevade detection by removing words indicative of spam from can be seen in Figure 4.\ntheir message. A comparison of performance is shown in An interesting artifact of our approach is that the nonTable 2. negative networks are easier to fool for low-confidence errors. We posit this is due to the probability distribution over classes\nTable 2: Out of sample performance on spam filtering in the becoming near uniform under attack. On CIFAR100 the yabsence of attack. axis is truncated for legibility since the evasion rate of FGSM\nis 93% and IGA is 99%. Similarly for non-negative Tiny\nClassifier Accuracy % Precision Recall AUC % F1 Score ImageNet, FGSM and IGA achieve 14% and 17% evasion\n2006 Lasso 96.5 0.974 0.993 97.1 0.983 rates when p = 0.005.\n2006 Non-Neg. 82.6 0.912 0.820 83.5 0.864 Despite these initial high evasion rates, we can see in\n2007 Lasso 99.7 0.999 0.999 99.7 0.999\n2007 Non-Neg. 93.6 0.962 0.940 93.0 0.951 all cases the success of targeted adversarial attacks reaches\n0% as the desired probability p increases. For MNIST and\nCIFAR10, which have only 10 classes, this occurs at up to\nDespite the drops in accuracy imposed by our non-negative a target 30% confidence. As more classes are added, the\nconstraint, the results are better than prior works in defending difficulty of the attack increases. For Tiny ImageNet and\nagainst weaker versions of the \"Good Word\" attack. For ex- CIFAR100, targeted attacks fail by ≤2%.\nample, Jorgensen, Zhou, and Inge (2008) developed a defense If targeted attacks from IGA were the only type of attack\nbased on multiple instance learning. Their approach when we needed to worry about, these results would also allow\nattacked with all of their selected good words had a precision us to use the confidence as a method of detecting attacks.\nof 0.772 and a recall of 0.743 on the 2006 TREC corpus. 
For example, CIFAR10 had the weakest results, needing a\nThis was the best result of all their tested methods, but our target confidence of 30% before targeted attacks failed. The\nnon-negative approach achieves a superior 0.912 and 0.820 average predicted confidence of the non-negative network on\nprecision and recall respectively. While spam authors are not the test set was 93.8%.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 24, |
| "total_chunks": 45, |
| "char_count": 2331, |
| "word_count": 394, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "83916f9a-04c7-4c31-89c4-a4b50054008b", |
| "text": "This means we can use the confidence\nas restricted to modify their inputs, our approach forces them itself as a measure of network robustness. If we default to\nto move up to a more expensive threat model (removing and a \"no-answer\" for everything with a confidence of 40% or\nmodifying features, rather than just adding) — which we less on CIFAR10, and assume anything below that level is an\nargue is of intrinsic value. attack and error, the accuracy would have only gone down\nDemontis et al. (2017) had concluded that there existed an 1.2%.\n\"implicit trade-off between security and sparsity\" in building In order to determine if non-negative constraints are merely\na secure model in their Android malware work. At least for acting as a gradient obfuscation technique (Athalye, Carlini,\nthe additive adversary, we provide evidence with our byte and Wagner, 2018), we also attempted a black box attack by\nn-grams and spam models that this is not an absolute. In both attacking a substitute model without the non-negative concases we begin with a full feature set and the non-negative straints and assessing whether the perturbed images created\napproach learns a sparse model, where all \"good words\" (or by this attack were able to fool the constrained model. In orbytes) are given coefficient values of zero.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 25, |
| "total_chunks": 45, |
| "char_count": 1307, |
| "word_count": 217, |
| "chunking_strategy": "semantic" |
| }, |
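The \"no-answer\" rule above is a plain rejection threshold on the top predicted probability; the 0.40 cutoff mirrors the CIFAR10 number in the text, while the probability vectors below are invented.

```python
import numpy as np

def predict_or_abstain(probs, reject_below=0.40):
    """Return the argmax class, or None ('no-answer') when the top probability
    falls below the rejection threshold, treating low confidence as a likely attack."""
    probs = np.asarray(probs)
    top = int(probs.argmax())
    return top if probs[top] >= reject_below else None

print(predict_or_abstain([0.05, 0.90, 0.05]))        # -> 1     confident: answer
print(predict_or_abstain([0.12, 0.11, 0.10, 0.67]))  # -> 3
print(predict_or_abstain([0.26, 0.25, 0.25, 0.24]))  # -> None  near-uniform: abstain
```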
| { |
| "chunk_id": "8d00585a-dbad-458b-87de-c1d2207f1122", |
| "text": "As such we see der to make the attack as strong as possible, the unconstrained\nthat sparsity and security occur together to defend against the network was the same as the network that was used to warmadditive adversary.\n3v1 model taken from https://tinyurl.com/\n5.3 Image Classification keras-cifar10-restnet\nHaving investigated the performance of non-negative learn- 4ResNet50 built-in application from https://keras.io/\ning for malware detection, we now look at its potential for applications/#resnet50 MNIST CIFAR10 CIFAR100 Tiny ImageNet\n30 30 8 8\nSoftmax FGSM\nSoftmax IGA\nNon-Neg FGSM 6 6\nRate 20 Non-Neg IGA 20\n4 4 20 40 60 80 100 20 40 60 80 100 10−1 100 101 102 100 101 102\nTarget Confidence Target Confidence Target Confidence Target Confidence Figure 4: Targeted evasion rate (y-axis) as a function of the desired misclassification confidence p (x-axis) for four datasets. Due\nto the differing ranges of interest, right two figures shown in log scale for the x-axis.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 26, |
| "total_chunks": 45, |
| "char_count": 976, |
| "word_count": 156, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e03725d4-d3b1-4db5-be40-90c1036d7d1f", |
| "text": "start the non-negative training. This should maximize the ing Defenses to Adversarial Examples. In International\ntransferability of attacks from one to the other. Despite this Conference on Machine Learning (ICML).\nsimilarity, transfered attacks had only a 1.042% success rate, Biggio, B.; Fumera, G.; and Roli, F. 2014. Security evaluation\nwhich is one reason we believe that non-negative constraints of pattern classifiers under attack. IEEE Transactions on\nare not merely a form of gradient obfuscation. Knowledge and Data Engineering 26(4):984–996.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 27, |
| "total_chunks": 45, |
| "char_count": 552, |
| "word_count": 79, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2ea74a6a-7766-4ea5-9560-732ecfdc6f7c", |
| "text": "We emphasize that these results are evidence that we can\nextend non-negativity to provide benefit in the multi-class Carlini, N., and Wagner, D. 2017a. Adversarial Examples Are\ncase. Our approach appears to have lower cost in multi-class Not Easily Detected: Bypassing Ten Detection Methods. Intelligence and Security, AISec '17, 3–14. New York,in each dataset. While the cost is lower, its utility is lower\nas well.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 28, |
| "total_chunks": 45, |
| "char_count": 416, |
| "word_count": 65, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e4fba740-d933-40ed-a0ba-afd941f7bc2d", |
| "text": "Our multi-class non-negative approach provides no NY, USA: ACM.\nbenefit in untargeted attacks — where any error by the model Carlini, N., and Wagner, D. 2017b. Towards Evaluating the\nis acceptable to the attacker — even under the weaker FGSM Robustness of Neural Networks. In 2017 IEEE Symposium\nattack. When even stronger attacks like Projected Gradient on Security and Privacy (SP), 39–57. Descent are used, our approach is also defeated in the tarChistyakov, A.; Lobacheva, E.; Kuznetsov, A.; and Romageted scenario. Under the moderate-strength IGA attack, we\nnenko, A. 2017. Semantic Embeddings for Program bealso see that susceptibility to evasion is increased for lowhavior Patterns. In ICLR Workshop.\nconfidence evasions.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 29, |
| "total_chunks": 45, |
| "char_count": 728, |
| "word_count": 110, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2115b3db-4d76-4188-b7ba-ea478f57289b", |
| "text": "In total, we view these results as indicative that non-negativity can have utility for the multi-class case and provide some level of benefit that is intrinsically interesting, but more work is needed to determine a better way to apply the technique.\n6 Conclusion\nWe have shown that an increased robustness to adversarial examples can be achieved through non-negative weight constraints. Constrained binary classifiers can only identify features associated with the positive class during test time.\nChollet, F. 2015. Keras.\nChorowski, J., and Zurada, J. 2015. Learning Understandable Neural Networks With Nonnegative Weight Constraints. IEEE Transactions on Neural Networks and Learning Systems 26(1):62–69.\nDalvi, N.; Domingos, P.; Mausam; Sanghai, S.; and Verma, D. 2004. Adversarial Classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, 99–108. New York, NY, USA: ACM.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 30, |
| "total_chunks": 45, |
| "char_count": 929, |
| "word_count": 137, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b9d7aff4-41a2-4a9d-911b-c5f1c4c7eb0d", |
| "text": "Therefore, the only method for fooling the model is to remove features associated with that class. This method is particularly useful in security-centric domains like malware detection, which have well-known adversarial motivation. Forcing adversaries to remove maliciousness in these domains is the desired outcome. We have also described a technique to generalize this robustness to multi-class domains such as image classification. We showed a significant increase in robustness to targeted adversarial attacks while minimizing the amount of accuracy lost in doing so.\nReferences\nDemontis, A.; Melis, M.; Biggio, B.; Maiorca, D.; Arp, D.; Rieck, K.; Corona, I.; Giacinto, G.; and Roli, F. 2017. Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection. IEEE Transactions on Dependable and Secure Computing 1–1.\nFerrand, O., and Filiol, E. 2016. Combinatorial detection of malware by IAT discrimination. Journal of Computer Virology and Hacking Techniques 12(3):131–136.\nGoodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations (ICLR).\nAthalye, A.; Carlini, N.; and Wagner, D. 2018.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 31, |
| "total_chunks": 45, |
| "char_count": 1200, |
| "word_count": 175, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "308c0547-f14e-4ae7-891e-c208fb93ad8d", |
| "text": "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples.\nGrosse, K.; Papernot, N.; Manoharan, P.; Backes, M.; and McDaniel, P. 2016. Adversarial perturbations against deep neural networks for malware classification. CoRR abs/1606.04435.\nRussu, P.; Demontis, A.; Biggio, B.; Fumera, G.; and Roli, F. 2016. Secure Kernel Machines Against Evasion Attacks. In Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security, AISec '16, 59–69. New York, NY, USA: ACM.\nJorgensen, Z.; Zhou, Y.; and Inge, M. 2008. A Multiple Instance Learning Strategy for Combating Good Word Attacks on Spam Filters. Journal of Machine Learning Research 9:1115–1146.\nSahs, J., and Khan, L. 2012. A Machine Learning Approach to Android Malware Detection.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 32, |
| "total_chunks": 45, |
| "char_count": 586, |
| "word_count": 85, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e5836da4-3056-4a0e-a7fc-62afbb3b0c81", |
| "text": "In 2012 European Intelligence and Security Informatics Conference, 141–147. IEEE.\nKołcz, A., and Teo, C. 2009. Feature Weighting for Improved Classifier Robustness. In 6th Conference on Email and Anti-Spam (CEAS'09).", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 33, |
| "total_chunks": 45, |
| "char_count": 211, |
| "word_count": 29, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6b6f20db-fe9d-457a-83a3-dc90c765a0b2", |
| "text": "Saxe, J., and Berlin, K. 2015. Deep Neural Network Based Malware Detection Using Two Dimensional Binary Program Features.\nKolosnjaji, B.; Demontis, A.; Biggio, B.; Maiorca, D.; Giacinto, G.; Eckert, C.; and Roli, F. 2018. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables. In 26th European Signal Processing Conference (EUSIPCO '18).\nSuciu, O.; Coull, S. E.; and Johns, J. 2018.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 34, |
| "total_chunks": 45, |
| "char_count": 386, |
| "word_count": 59, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "62922365-e9f4-466c-b4f3-49757dd28a89", |
| "text": "Exploring Adversarial Examples in Malware Detection. In AAAI 2018 Fall Symposium Series: Adversary-Aware Learning Techniques and Trends in Cybersecurity (ALEC).\nKreuk, F.; Barak, A.; Aviv-Reuven, S.; Baruch, M.; Pinkas, B.; and Keshet, J. 2018.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 35, |
| "total_chunks": 45, |
| "char_count": 277, |
| "word_count": 37, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4f3817bb-eb6d-420b-9f4b-c4a1c091bc14", |
| "text": "Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection. arXiv preprint.\nSzegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. In ICLR.\nKurakin, A.; Goodfellow, I.; and Bengio, S. 2017a. Adversarial examples in the physical world. In International Conference on Learning Representations (ICLR).\nUgarte-Pedrero, X.; Balzarotti, D.; Santos, I.; and Bringas, P. G. 2016. RAMBO: Run-Time Packer Analysis with Multiple Branch Observation. In Proceedings of the 13th International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment - Volume 9721. New York, NY, USA: Springer-Verlag New York, Inc.\nKurakin, A.; Goodfellow, I.; and Bengio, S. 2017b. Adversarial Machine Learning at Scale. In International Conference on Learning Representations (ICLR).\nLowd, D., and Meek, C. 2005a.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 36, |
| "total_chunks": 45, |
| "char_count": 884, |
| "word_count": 119, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9bfb78ea-64b8-4149-b80d-2367461cf59b", |
| "text": "Adversarial Learning. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, KDD '05, 641–647. New York, NY, USA: ACM.\nYih, S. W.-t.; Goodman, J.; and Hulten, G. 2006. Learning at Low False Positive Rates. In Proceedings of the 3rd Conference on Email and Anti-Spam.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 37, |
| "total_chunks": 45, |
| "char_count": 341, |
| "word_count": 55, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "85027064-6b4c-48f9-a7ae-2a47616ab76d", |
| "text": "Lowd, D., and Meek, C. 2005b. Good Word Attacks on Statistical Spam Filters. In Conference on Email and Anti-Spam (CEAS), 125–132.\nYuan, X.; He, P.; Zhu, Q.; and Bhat, R. 2017. Adversarial Examples: Attacks and Defenses for Deep Learning.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 38, |
| "total_chunks": 45, |
| "char_count": 217, |
| "word_count": 36, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1b1eb6dd-f054-4d65-9174-594d4b00515d", |
| "text": "Maiorca, D.; Biggio, B.; and Giacinto, G. 2018. Towards Robust Detection of Adversarial Infection Vectors: Lessons Learned in PDF Malware. arXiv preprint.\nZhou, Y., and Inge, W. 2008. Malware Detection Using Adaptive Data Compression. In Proceedings of the 1st ACM Workshop on AISec. New York, NY, USA: ACM.\nPascanu, R.; Stokes, J. W.; Sanossian, H.; Marinescu, M.; and Thomas, A. 2015. Malware Classification With Recurrent Networks. IEEE - Institute of Electrical and Electronics Engineers.\nZhou, Y.; Jorgensen, Z.; and Inge, M. 2007. Combating Good Word Attacks on Statistical Spam Filters with Multiple Instance Learning. In 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), 298–305. IEEE.\nRaff, E., and Nicholas, C. 2017. Malware Classification and Class Imbalance via Stochastic Hashed LZJD.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 39, |
| "total_chunks": 45, |
| "char_count": 762, |
| "word_count": 111, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5b02b5b4-143e-4f5a-ab77-6f73e64bc42e", |
| "text": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec '17, 111–120. New York, NY, USA: ACM.\nRaff, E.; Zak, R.; Cox, R.; Sylvester, J.; Yacci, P.; Ward, R.; Tracy, A.; McLean, M.; and Nicholas, C. 2016. An investigation of byte n-gram features for malware classification. Journal of Computer Virology and Hacking Techniques.\nRaff, E.; Barker, J.; Sylvester, J.; Brandon, R.; Catanzaro, B.; and Nicholas, C. 2018.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 40, |
| "total_chunks": 45, |
| "char_count": 395, |
| "word_count": 63, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b2af9400-e863-4e20-a7b4-34c480afbeb7", |
| "text": "Malware Detection by Eating a Whole EXE. In AAAI Workshop on Artificial Intelligence for Cyber Security.\nRauber, J.; Brendel, W.; and Bethge, M. 2017. Foolbox: A Python toolbox to benchmark the robustness of machine learning models.\nAppendix A Threshold for CIFAR Adversaries\nWhen creating an adversarial attack, some portion of the image must be changed as an intrinsic part of the attack. There is currently considerable debate about how large that change ought to be, how it should be measured, and how much we should care about the nature of changes to the original image. All of these could be topics of research in their own right, and we do not claim to solve them in this work.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 41, |
| "total_chunks": 45, |
| "char_count": 665, |
| "word_count": 117, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f6a11b08-0940-48e7-8b7f-7f74c005cc3f", |
| "text": "We use the L1 distance as a measure simply because it has been used by prior works, even if it is not ideal. We also take the stance that there should be no perceptible difference between the input and the attacked result, while recognizing that this is not a procedure agreed upon by everyone in the community. Intuitively, we find the lack of perceptible difference important because it leaves no ambiguity about the ground-truth label. If the input is noticeably perturbed by an attack, the true label of the new attacked image may come into question. Our CIFAR 10 & 100 results against the non-negative networks often produced large differences, bringing us to the need to impose a threshold at which we will consider an attack a \"failure.\"\nFigure 5 highlights the need to choose a threshold by providing examples of the spectrum of L1 distances we observed between the original and the adversarially generated images. It starts with small distances, such as Figure 5a, which has an L1 distance of only 10 and is clearly still a truck. At the extreme end, we also had results like Figure 5f, which had an L1 distance of 1000 and is wholly unrecognizable. We argue that such an attack must be a failure, as the input does not even resemble the distribution of images from CIFAR.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 42, |
| "total_chunks": 45, |
| "char_count": 1285, |
| "word_count": 225, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f8836e91-5de0-4f59-b0ed-aa3f061fdfa1", |
| "text": "The question is, where does one draw the line? While we argue that an imperceptible difference is important to avoid label ambiguity, we have attempted to be deferential in allowing large-magnitude attacks, while also recognizing that L1 is not the ideal way to measure visual perceptual difference. As such, we have experimentally selected a threshold of 60 as one that allows for perceptible differences yet sits at the edge of no longer being recognizable as the original class. Our decision to use a threshold of 60 is best shown in Figure 5c, where the L1 difference starts to demonstrate an obvious perceptible difference. We feel this image represents a balance between the subjective ability to still tell that it is a type of car or truck, and having difficulty recognizing what is in the image (or whether it is still valid) without the context of the original image next to it. Allowing larger thresholds for the CIFAR attacks begins to enter a territory where it is not clear to us that the true label of the image has been retained. Figure 5d shows one such example with a deer, where the adversarial image has the same colors but it is unclear to us what it should be labeled as.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 43, |
| "total_chunks": 45, |
| "char_count": 1213, |
| "word_count": 213, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9649854a-6a9a-424d-9a0d-3fa5448932f8", |
| "text": "(a) L1 difference of 10, no perceptible difference. (b) L1 difference of 30, minute perceptible difference. (c) L1 difference of 60, obvious perceptible difference, though objects appear mostly \"the same\". (d) L1 difference of 150, significant artifacts emerging in attack, original object is generally unrecognizable. (e) L1 difference of 400, original object is no longer recognizable. (f) L1 difference of 1000, the image has been completely destroyed. Figure 5: Examples of IGA attacks against CIFAR 10 images with our non-negative network. In each subfigure, the leftmost image is the original image, the middle is the attacked result, and the right shows the difference. Moving from subfigure (a) to (f), the L1 difference between the original and adversarial image increases.", |
| "paper_id": "1806.06108", |
| "title": "Non-Negative Networks Against Adversarial Attacks", |
| "authors": [ |
| "William Fleshman", |
| "Edward Raff", |
| "Jared Sylvester", |
| "Steven Forsyth", |
| "Mark McLean" |
| ], |
| "published_date": "2018-06-15", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.06108v2", |
| "chunk_index": 44, |
| "total_chunks": 45, |
| "char_count": 785, |
| "word_count": 121, |
| "chunking_strategy": "semantic" |
| } |
| ] |
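The appendix's failure criterion (an adversarial image whose L1 distance from the original exceeds a threshold, chosen as 60 for the CIFAR experiments, is counted as a failed attack) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function names and the flat pixel-list representation are our own assumptions.

```python
# Sketch of the appendix's L1 threshold rule. Hypothetical helper names;
# real images would be 32x32x3 CIFAR arrays flattened to one list.

def l1_distance(original, attacked):
    """Sum of absolute per-pixel differences between two flat pixel lists."""
    return sum(abs(a - b) for a, b in zip(original, attacked))

def attack_counts(original, attacked, threshold=60.0):
    """Count the attack as a success only if the total perturbation stays
    at or under the perceptibility threshold (60 in the paper's appendix)."""
    return l1_distance(original, attacked) <= threshold

if __name__ == "__main__":
    # Toy 4-pixel "image" perturbed by 10 per pixel: L1 = 40, under 60.
    orig = [100.0, 120.0, 90.0, 110.0]
    adv = [110.0, 130.0, 80.0, 100.0]
    print(l1_distance(orig, adv))    # 40.0
    print(attack_counts(orig, adv))  # True
```

Under this rule, an example like Figure 5f (L1 distance of 1000) is rejected outright, while one like Figure 5a (L1 distance of 10) counts as a successful evasion.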