Computes the gradient of the loss with respect to the input tensor.
:param model_fn: a callable that takes an input tensor and returns the model logits.
:param x: input tensor
:param y: Tensor with true labels. If targeted is true, then provide the target label.
:param targeted: bool. Is the attack targeted or... |
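A minimal sketch of the gradient computation this docstring describes, assuming TF2 eager mode and integer (sparse) labels; compute_gradient_sketch and the loss choice are illustrative, not the library's exact implementation:
import tensorflow as tf
def compute_gradient_sketch(model_fn, x, y, targeted):
    # Cross-entropy between the (sparse) labels and the model logits.
    loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_obj(y, model_fn(x))
        if targeted:
            # A targeted attack descends on the target-label loss.
            loss = -loss
    return tape.gradient(loss, x) |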
Solves for the optimal input to a linear function under a norm constraint.
Optimal_perturbation = argmax_{eta, ||eta||_{ord} < eps} dot(eta, grad)
:param grad: tf tensor containing a batch of gradients
:param eps: float scalar specifying size of constraint region
:param ord: int specifying order of norm
:re... |
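A minimal sketch of the closed-form solution described above, assuming TF1-style tensors; the function name and the 1e-12 guard are illustrative:
import numpy as np
import tensorflow as tf
def optimize_linear_sketch(grad, eps, ord=np.inf):
    # Reduce over all axes except the batch axis.
    red_ind = list(range(1, len(grad.get_shape())))
    if ord == np.inf:
        # L-inf: move every coordinate by eps in the sign of the gradient.
        opt_pert = tf.sign(grad)
    elif ord == 1:
        # L1: put the whole budget on the largest-magnitude coordinate(s).
        abs_grad = tf.abs(grad)
        max_abs = tf.reduce_max(abs_grad, axis=red_ind, keepdims=True)
        tied = tf.cast(tf.equal(abs_grad, max_abs), tf.float32)
        num_ties = tf.reduce_sum(tied, axis=red_ind, keepdims=True)
        opt_pert = tf.sign(grad) * tied / num_ties
    elif ord == 2:
        # L2: rescale the gradient to unit L2 norm per example.
        sq = tf.reduce_sum(tf.square(grad), axis=red_ind, keepdims=True)
        opt_pert = grad / tf.sqrt(tf.maximum(1e-12, sq))
    else:
        raise NotImplementedError('Only np.inf, 1 and 2 are supported.')
    return eps * opt_pert |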
Saves a pdf of the current matplotlib figure.
:param path: str, filepath to save to
from matplotlib.backends.backend_pdf import PdfPages
from matplotlib import pyplot

def save_pdf(path):
    """
    Saves a pdf of the current matplotlib figure.
    :param path: str, filepath to save to
    """
    pp = PdfPages(path)
    pp.savefig(pyplot.gcf())  # grab whatever figure is currently active
    pp.close() |
Helper function to clip the perturbation to epsilon norm ball.
:param eta: A tensor with the current perturbation.
:param ord: Order of the norm (mimics Numpy).
Possible values: np.inf, 1 or 2.
:param eps: Epsilon, bound of the perturbation.
def clip_eta(eta, ord, eps):
"""
Helper function to c... |
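A sketch of the clipping helper under the same interface, assuming TF1; note that for ord=1 this rescales by the L1 norm, which is a normalization rather than an exact projection onto the L1 ball:
import numpy as np
import tensorflow as tf
def clip_eta_sketch(eta, ord, eps):
    if ord not in [np.inf, 1, 2]:
        raise ValueError('ord must be np.inf, 1, or 2.')
    red_ind = list(range(1, len(eta.get_shape())))
    if ord == np.inf:
        # Each coordinate is clipped independently to [-eps, eps].
        return tf.clip_by_value(eta, -eps, eps)
    if ord == 1:
        norm = tf.maximum(
            1e-12, tf.reduce_sum(tf.abs(eta), axis=red_ind, keepdims=True))
    else:  # ord == 2
        norm = tf.sqrt(tf.maximum(
            1e-12, tf.reduce_sum(tf.square(eta), axis=red_ind, keepdims=True)))
    # Scale down only if the perturbation lies outside the ball.
    return eta * tf.minimum(1., eps / norm) |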
Define and train a model that simulates the "remote"
black-box oracle described in the original paper.
:param sess: the TF session
:param x: the input placeholder for MNIST
:param y: the output placeholder for MNIST
:param x_train: the training data for the oracle
:param y_train: the training labels for the ... |
This function creates the substitute by alternately
augmenting the training data and training the substitute.
:param sess: TF session
:param x: input TF placeholder
:param y: output TF placeholder
:param bbox_preds: output of black-box model predictions
:param x_sub: initial substitute training data
:pa... |
MNIST tutorial for the black-box attack from arxiv.org/abs/1602.02697
:param train_start: index of first training set example
:param train_end: index of last training set example
:param test_start: index of first test set example
:param test_end: index of last test set example
:return: a dictionary with:
... |
Pad a single image and then crop to the original size with a random
offset.
def random_shift(x, pad=(4, 4), mode='REFLECT'):
"""Pad a single image and then crop to the original size with a random
offset."""
assert mode in 'REFLECT SYMMETRIC CONSTANT'.split()
assert x.get_shape().ndims == 3
xp = tf.pad(x, [... |
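The snippet is cut off above; a runnable sketch of the same pad-then-random-crop idea, assuming a TF1 graph and a 3-D HWC input:
import tensorflow as tf
def random_shift_sketch(x, pad=(4, 4), mode='REFLECT'):
    assert mode in ('REFLECT', 'SYMMETRIC', 'CONSTANT')
    assert x.get_shape().ndims == 3
    # Pad rows and columns only; leave the channel axis untouched.
    xp = tf.pad(x, [[pad[0], pad[0]], [pad[1], pad[1]], [0, 0]], mode)
    # Crop back to the original size at a random offset.
    return tf.random_crop(xp, tf.shape(x)) |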
Apply dataset augmentation to a batch of examples.
:param x: Tensor representing a batch of examples.
:param func: Callable implementing dataset augmentation, operating on
a single image.
:param device: String specifying which device to use.
def batch_augment(x, func, device='/CPU:0'):
"""
Apply dataset ... |
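A sketch of how such a batch wrapper can work, assuming TF1; tf.map_fn applies the per-image callable along the batch axis:
import tensorflow as tf
def batch_augment_sketch(x, func, device='/CPU:0'):
    with tf.device(device):
        # Apply the single-image augmentation to every example in the batch.
        return tf.map_fn(func, x) |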
Augment a batch by randomly cropping and horizontally flipping it.
def random_crop_and_flip(x, pad_rows=4, pad_cols=4):
"""Augment a batch by randomly cropping and horizontally flipping it."""
rows = tf.shape(x)[1]
cols = tf.shape(x)[2]
channels = x.get_shape()[3]
def _rand_crop_img(img):
"""Randomly cr... |
MNIST cleverhans tutorial
:param train_start: index of first training set example
:param train_end: index of last training set example
:param test_start: index of first test set example
:param test_end: index of last test set example
:param nb_epochs: number of epochs to train model
:param batch_size: size ... |
Project `perturbation` onto L-infinity ball of radius `epsilon`.
Also project into hypercube such that the resulting adversarial example
is between clip_min and clip_max, if applicable.
def _project_perturbation(perturbation, epsilon, input_image, clip_min=None,
clip_max=None):
"""Proje... |
Computes difference between logit for `label` and next highest logit.
The loss is high when `label` is unlikely (targeted by default).
This follows the same interface as `loss_fn` for TensorOptimizer and
projected_optimization, i.e. it returns a batch of loss values.
def margin_logit_loss(model_logits, label, n... |
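A sketch of the margin loss, assuming integer labels and a known class count; the large-constant masking trick is illustrative:
import tensorflow as tf
def margin_logit_loss_sketch(model_logits, label, nb_classes=10):
    # Logit assigned to `label` for each example.
    logit_mask = tf.one_hot(label, depth=nb_classes, axis=-1)
    label_logits = tf.reduce_sum(logit_mask * model_logits, axis=-1)
    # Highest logit among the remaining classes.
    masked_logits = model_logits - logit_mask * 99999.
    highest_other = tf.reduce_max(masked_logits, axis=-1)
    # High when `label` is unlikely relative to the best competitor.
    return highest_other - label_logits |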
TensorFlow implementation of the Spatial Transformation Method.
:return: a tensor for the adversarial example
def spm(x, model, y=None, n_samples=None, dx_min=-0.1,
dx_max=0.1, n_dxs=5, dy_min=-0.1, dy_max=0.1, n_dys=5,
angle_min=-30, angle_max=30, n_angles=31, black_border_size=0):
"""
TensorFlo... |
Apply image transformations in parallel.
:param transforms: TODO
:param black_border_size: int, size of black border to apply
Returns:
Transformed images
def parallel_apply_transformations(x, transforms, black_border_size=0):
"""
Apply image transformations in parallel.
:param transforms: TODO
:param... |
Generic projected optimization, generalized to work with approximate
gradients. Used for e.g. the SPSA attack.
Args:
:param loss_fn: A callable which takes `input_image` and `label` as
arguments, and returns a batch of loss values. Same
interface as TensorOptimizer.
... |
Generate symbolic graph for adversarial examples.
:param x: The model's symbolic inputs. Must be a batch of size 1.
:param y: A Tensor or None. The index of the correct label.
:param y_target: A Tensor or None. The index of the target label in a
targeted attack.
:param eps: The siz... |
Compute a new value of `x` to minimize `loss_fn`.
Args:
loss_fn: a callable that takes `x`, a batch of images, and returns
a batch of loss values. `x` will be optimized to minimize
`loss_fn(x)`.
x: A list of Tensors, the values to be updated. This is analogous
to... |
Analogous to tf.Optimizer.minimize
:param loss_fn: tf Tensor, representing the loss to minimize
:param x: list of Tensor, analogous to tf.Optimizer's var_list
:param optim_state: A possibly nested dict, containing any optimizer state.
Returns:
new_x: list of Tensor, updated version of `x`
... |
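Given the two hooks this interface implies, the minimize step itself reduces to a sketch like this (the hook names match the optimizer snippets that appear below):
def minimize_sketch(self, loss_fn, x, optim_state):
    # Estimate gradients, then take one optimizer step.
    grads = self._compute_gradients(loss_fn, x, optim_state)
    return self._apply_gradients(grads, x, optim_state) |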
Initialize t, m, and u
def init_state(self, x):
    """
    Initialize t, m, and u
    """
    # Adam-style state: step counter t, first-moment accumulators m,
    # and second-moment accumulators u, one pair per tensor in x.
    optim_state = {}
    optim_state["t"] = 0.
    optim_state["m"] = [tf.zeros_like(v) for v in x]
    optim_state["u"] = [tf.zeros_like(v) for v in x]
    return optim_state |
Refer to parent class documentation.
def _apply_gradients(self, grads, x, optim_state):
    """Refer to parent class documentation."""
    new_x = [None] * len(x)
    new_optim_state = {
        "t": optim_state["t"] + 1.,
        "m": [None] * len(x),
        "u": [None] * len(x)
    }
    t = new_optim_state["t"]
... |
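The update itself is cut off; a self-contained sketch of the bias-corrected Adam step over a list of tensors, assuming t is a Python float and with illustrative hyperparameter defaults:
import tensorflow as tf
def adam_apply_gradients_sketch(grads, x, optim_state,
                                lr=0.01, beta1=0.9, beta2=0.999, eps=1e-9):
    t = optim_state['t'] + 1.
    new_x, new_m, new_u = [], [], []
    for g, v, m, u in zip(grads, x, optim_state['m'], optim_state['u']):
        # Exponential moving averages of the gradient and its square.
        m = beta1 * m + (1. - beta1) * g
        u = beta2 * u + (1. - beta2) * g * g
        # Bias correction compensates for the zero initialization.
        m_hat = m / (1. - beta1 ** t)
        u_hat = u / (1. - beta2 ** t)
        new_x.append(v - lr * m_hat / (tf.sqrt(u_hat) + eps))
        new_m.append(m)
        new_u.append(u)
    return new_x, {'t': t, 'm': new_m, 'u': new_u} |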
Compute gradient estimates using SPSA.
def _compute_gradients(self, loss_fn, x, unused_optim_state):
"""Compute gradient estimates using SPSA."""
# Assumes `x` is a list, containing a [1, H, W, C] image
# If static batch dimension is None, tf.reshape to batch size 1
# so that static shape can be inferr... |
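A sketch of the two-sided SPSA estimate for a single [1, H, W, C] image, assuming a static shape and a loss_fn that maps a batch of images to a batch of scalar losses:
import tensorflow as tf
def spsa_gradient_sketch(loss_fn, x, delta=0.01, num_samples=128):
    hwc = x.get_shape().as_list()[1:]
    # Rademacher (+1/-1) perturbation directions, one per sample.
    pert = tf.sign(tf.random_uniform([num_samples] + hwc,
                                     minval=-1., maxval=1.))
    # Two-sided finite differences along each random direction.
    loss_plus = loss_fn(x + delta * pert)
    loss_minus = loss_fn(x - delta * pert)
    diff = tf.reshape(loss_plus - loss_minus, [num_samples, 1, 1, 1])
    # Since every coordinate of pert is +/-1, dividing by pert equals
    # multiplying by it.
    grad_estimates = diff / (2. * delta) * pert
    # Average the estimates and restore the batch dimension of 1.
    return tf.reduce_mean(grad_estimates, axis=0, keepdims=True) |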
Parses command line arguments.
def parse_args():
"""Parses command line arguments."""
parser = argparse.ArgumentParser(
description='Tool to run attacks and defenses.')
parser.add_argument('--attacks_dir', required=True,
help='Location of all attacks.')
parser.add_argument('--target... |
Scans directory and reads all submissions.
Args:
dirname: directory to scan.
use_gpu: whether submissions should use GPU. This argument is
used to pick proper Docker container for each submission and create
instance of Attack or Defense class.
Returns:
List with submissions (subclasses of S... |
Loads output of defense from given file.
def load_defense_output(filename):
"""Loads output of defense from given file."""
result = {}
with open(filename) as f:
for row in csv.reader(f):
try:
image_filename = row[0]
if image_filename.endswith('.png') or image_filename.endswith('.jpg'):
... |
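A sketch of how the truncated loader plausibly finishes: strip image extensions, parse the predicted label, and skip malformed rows:
import csv
def load_defense_output_sketch(filename):
    result = {}
    with open(filename) as f:
        for row in csv.reader(f):
            try:
                image_filename = row[0]
                if image_filename.endswith('.png') or image_filename.endswith('.jpg'):
                    # Strip the extension so keys match dataset image IDs.
                    image_filename = image_filename[:image_filename.rfind('.')]
                result[image_filename] = int(row[1])
            except (IndexError, ValueError):
                continue
    return result |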
Computes scores and ranking and saves it.
Args:
attacks_output: output of attacks, instance of AttacksOutput class.
defenses_output: outputs of defenses. Dictionary of dictionaries, key in
outer dictionary is name of the defense, key of inner dictionary is
name of the image, value of inner dictio... |
Run all attacks against all defenses and compute results.
def main():
"""Run all attacks against all defenses and compute results.
"""
args = parse_args()
attacks_output_dir = os.path.join(args.intermediate_results_dir,
'attacks_output')
targeted_attacks_output_dir = os.pa... |
Runs attack inside Docker.
Args:
input_dir: directory with input (dataset).
output_dir: directory where output (adversarial images) should be written.
epsilon: maximum allowed size of adversarial perturbation,
should be in range [0, 255].
def run(self, input_dir, output_dir, epsilon):
... |
Helper method which loads dataset and determines clipping range.
Args:
dataset_dir: location of the dataset.
epsilon: maximum allowed size of adversarial perturbation.
def _load_dataset_clipping(self, dataset_dir, epsilon):
"""Helper method which loads dataset and determines clipping range.
A... |
Clips results of attack and copies them to directory with all images.
Args:
attack_name: name of the attack.
is_targeted: if True then attack is targeted, otherwise non-targeted.
def clip_and_copy_attack_outputs(self, attack_name, is_targeted):
"""Clips results of attack and copy it to directory with ... |
Saves target classes for all dataset images into given file.
def save_target_classes(self, filename):
    """Saves target classes for all dataset images into given file."""
    with open(filename, 'w') as f:
        for k, v in self._target_classes.items():
            f.write('{0}.png,{1}\n'.format(k, v)) |
A reasonable attack bundling recipe for a max norm threat model and
a defender that uses confidence thresholding. This recipe uses both
uniform noise and randomly-initialized PGD targeted attacks.
References:
https://openreview.net/forum?id=H1g0piA9tQ
This version runs each attack (noise, targeted PGD for e... |
Max confidence using random search.
References:
https://openreview.net/forum?id=H1g0piA9tQ
Describes the max_confidence procedure used for the bundling in this recipe
https://arxiv.org/abs/1802.00420
Describes using random search with 1e5 or more random points to avoid
gradient masking.
:param ses... |
Runs attack bundling.
Users of cleverhans may call this function but are more likely to call
one of the recipes above.
Reference: https://openreview.net/forum?id=H1g0piA9tQ
:param sess: tf.session.Session
:param model: cleverhans.model.Model
:param x: numpy array containing clean example inputs to attack
... |
Runs attack bundling, working on one specific AttackGoal.
This function is mostly intended to be called by `bundle_attacks`.
Reference: https://openreview.net/forum?id=H1g0piA9tQ
:param sess: tf.session.Session
:param model: cleverhans.model.Model
:param x: numpy array containing clean example inputs to att... |
Runs attack bundling on one batch of data.
This function is mostly intended to be called by
`bundle_attacks_with_goal`.
:param sess: tf.session.Session
:param model: cleverhans.model.Model
:param x: numpy array containing clean example inputs to attack
:param y: numpy array containing true labels
:param ... |
Saves the report and adversarial examples.
:param criteria: dict, of the form returned by AttackGoal.get_criteria
:param report: dict containing a confidence report
:param report_path: string, filepath
:param adv_x_val: numpy array containing dataset of adversarial examples
def save(criteria, report, report_pa... |
Returns a list of attack configs that have not yet been run the desired
number of times.
:param new_work_goal: dict mapping attacks to desired number of times to run
:param work_before: dict mapping attacks to number of times they were run
before starting this new goal. Should be prefiltered to include only
... |
A post-processor version of attack bundling that chooses the strongest
example from the output of multiple earlier bundling strategies.
:param sess: tf.session.Session
:param model: cleverhans.model.Model
:param adv_x_list: list of numpy arrays
Each entry in the list is the output of a previous bundler; i... |
Runs the MaxConfidence attack using SPSA as the underlying optimizer.
Even though this runs only one attack, it must be implemented as a bundler
because SPSA supports only batch_size=1. The cleverhans.attacks.MaxConfidence
attack internally multiplies the batch size by nb_classes, so it can't take
SPSA as a ba... |
Returns a dictionary mapping the name of each criterion to a NumPy
array containing the value of that criterion for each adversarial
example.
Subclasses can add extra criteria by implementing the `extra_criteria`
method.
:param sess: tf.session.Session
:param model: cleverhans.model.Model
:... |
Returns a numpy array of integer example indices to run in the next batch.
def request_examples(self, attack_config, criteria, run_counts, batch_size):
"""
Returns a numpy array of integer example indices to run in the next batch.
"""
raise NotImplementedError(str(type(self)) +
... |
Returns a bool indicating whether a new adversarial example is better
than the pre-existing one for the same clean example.
:param orig_criteria: dict mapping names of criteria to their value
for each example in the whole dataset
:param orig_idx: The position of the pre-existing example within the
... |
Return run counts only for examples that are still correctly classified
def filter(self, run_counts, criteria):
"""
Return run counts only for examples that are still correctly classified
"""
correctness = criteria['correctness']
assert correctness.dtype == np.bool_
filtered_counts = deep_copy(r... |
Return the counts for only those examples that are below the threshold
def filter(self, run_counts, criteria):
"""
Return the counts for only those examples that are below the threshold
"""
wrong_confidence = criteria['wrong_confidence']
below_t = wrong_confidence <= self.t
filtered_counts = de... |
This class implements either the Basic Iterative Method
(Kurakin et al. 2016), when rand_init is set to 0, or the
Madry et al. (2017) method, when rand_minmax is larger than 0.
Paper link (Kurakin et al. 2016): https://arxiv.org/pdf/1607.02533.pdf
Paper link (Madry et al. 2017): https://arxiv.org/pdf/1706.06083.p... |
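A condensed sketch of the attack the class implements, in TF1 style; grad_fn stands in for the loss-gradient computation and is an assumption of this sketch:
import tensorflow as tf
def pgd_sketch(x, grad_fn, eps=0.3, eps_iter=0.05, nb_iter=10,
               rand_init=True, clip_min=0., clip_max=1.):
    # rand_init False gives BIM (Kurakin et al.); a random start in the
    # eps-ball gives the Madry et al. variant.
    if rand_init:
        eta = tf.random_uniform(tf.shape(x), minval=-eps, maxval=eps)
    else:
        eta = tf.zeros_like(x)
    adv_x = tf.clip_by_value(x + eta, clip_min, clip_max)
    for _ in range(nb_iter):
        # One signed-gradient step, then project back onto the eps-ball.
        adv_x = adv_x + eps_iter * tf.sign(grad_fn(adv_x))
        adv_x = x + tf.clip_by_value(adv_x - x, -eps, eps)
        adv_x = tf.clip_by_value(adv_x, clip_min, clip_max)
    return adv_x |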
Clip an image, or an image batch, with upper and lower threshold.
def clip_image(image, clip_min, clip_max):
    """ Clip an image, or an image batch, with upper and lower threshold. """
    return np.minimum(np.maximum(clip_min, image), clip_max) |
Compute the distance between two images.
def compute_distance(x_ori, x_pert, constraint='l2'):
    """ Compute the distance between two images. """
    if constraint == 'l2':
        dist = np.linalg.norm(x_ori - x_pert)
    elif constraint == 'linf':
        dist = np.max(abs(x_ori - x_pert))
    else:
        raise ValueError('constraint must be "l2" or "linf"')
    return dist |
Gradient direction estimation
def approximate_gradient(decision_function, sample, num_evals,
delta, constraint, shape, clip_min, clip_max):
""" Gradient direction estimation """
# Generate random vectors.
noise_shape = [num_evals] + list(shape)
if constraint == 'l2':
rv = np.random... |
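The estimator is cut off above; a NumPy sketch of the Monte Carlo direction estimate, assuming decision_function returns a boolean array over a batch:
import numpy as np
def approximate_gradient_sketch(decision_function, sample, num_evals,
                                delta, constraint, shape, clip_min, clip_max):
    noise_shape = [num_evals] + list(shape)
    # Random directions: Gaussian for l2, uniform for linf.
    if constraint == 'l2':
        rv = np.random.randn(*noise_shape)
    else:
        rv = np.random.uniform(low=-1, high=1, size=noise_shape)
    axes = tuple(range(1, len(noise_shape)))
    rv = rv / np.sqrt(np.sum(rv ** 2, axis=axes, keepdims=True))
    # Query the decision boundary around the current sample.
    perturbed = np.clip(sample + delta * rv, clip_min, clip_max)
    rv = (perturbed - sample) / delta
    decisions = decision_function(perturbed)
    fval = 2.0 * decisions.astype(np.float64).reshape(
        [num_evals] + [1] * len(shape)) - 1.0
    # Baseline subtraction reduces variance unless all decisions agree.
    if np.mean(fval) == 1.0:
        gradf = np.mean(rv, axis=0)
    elif np.mean(fval) == -1.0:
        gradf = -np.mean(rv, axis=0)
    else:
        fval -= np.mean(fval)
        gradf = np.mean(fval * rv, axis=0)
    # Only the direction matters.
    return gradf / np.linalg.norm(gradf) |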
Projection onto given l2 / linf balls in a batch.
def project(original_image, perturbed_images, alphas, shape, constraint):
""" Projection onto given l2 / linf balls in a batch. """
alphas_shape = [len(alphas)] + [1] * len(shape)
alphas = alphas.reshape(alphas_shape)
if constraint == 'l2':
projected = (1-a... |
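A sketch completing the projection for both norms: a convex blend toward each perturbed image for l2, and a box clip of width alpha for linf:
import numpy as np
def project_sketch(original_image, perturbed_images, alphas, shape, constraint):
    alphas = alphas.reshape([len(alphas)] + [1] * len(shape))
    if constraint == 'l2':
        # Move along the segment between the original and each perturbed image.
        return (1 - alphas) * original_image + alphas * perturbed_images
    # linf: clip each perturbed image into an alpha-box around the original.
    return np.clip(perturbed_images,
                   original_image - alphas,
                   original_image + alphas) |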
Binary search to approach the boundary.
def binary_search_batch(original_image, perturbed_images, decision_function,
shape, constraint, theta):
""" Binary search to approach the boundary. """
# Compute the distance between each perturbed image and the original image.
dists_post_update = np.ar... |
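A NumPy sketch of the full search, reusing compute_distance and project from the neighboring snippets (passed in here to keep the sketch self-contained):
import numpy as np
def binary_search_batch_sketch(original_image, perturbed_images,
                               decision_function, shape, constraint, theta,
                               compute_distance, project):
    dists_post_update = np.array([
        compute_distance(original_image, img, constraint)
        for img in perturbed_images])
    if constraint == 'linf':
        highs = dists_post_update
        # Per-image stopping thresholds for the linf search.
        thresholds = np.minimum(dists_post_update * theta, theta)
    else:
        highs = np.ones(len(perturbed_images))
        thresholds = theta
    lows = np.zeros(len(perturbed_images))
    # Shrink [lows, highs] while keeping the `highs` end adversarial.
    while np.max((highs - lows) / thresholds) > 1:
        mids = (highs + lows) / 2.0
        mid_images = project(original_image, perturbed_images, mids,
                             shape, constraint)
        decisions = decision_function(mid_images)
        lows = np.where(decisions == 0, mids, lows)
        highs = np.where(decisions == 1, mids, highs)
    out_images = project(original_image, perturbed_images, highs,
                         shape, constraint)
    # Return the image that ended up closest to the original.
    dists = np.array([compute_distance(original_image, img, constraint)
                      for img in out_images])
    idx = np.argmin(dists)
    return out_images[idx], dists_post_update[idx] |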
Efficient Implementation of BlendedUniformNoiseAttack in Foolbox.
def initialize(decision_function, sample, shape, clip_min, clip_max):
"""
Efficient Implementation of BlendedUniformNoiseAttack in Foolbox.
"""
success = 0
num_evals = 0
# Find a misclassified random noise.
while True:
random_noise = ... |
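A sketch of the full initialization: draw uniform noise until it is misclassified, then binary-search the blend factor toward the clean sample:
import numpy as np
def initialize_sketch(decision_function, sample, shape, clip_min, clip_max):
    num_evals = 0
    # Find a misclassified random-noise image.
    while True:
        random_noise = np.random.uniform(clip_min, clip_max, size=shape)
        success = decision_function(random_noise[None])[0]
        num_evals += 1
        if success:
            break
        assert num_evals < 1e4, 'Initialization failed!'
    # Binary-search the blending weight to move close to the boundary.
    low, high = 0.0, 1.0
    while high - low > 0.001:
        mid = (high + low) / 2.0
        blended = (1 - mid) * sample + mid * random_noise
        if decision_function(blended[None])[0]:
            high = mid
        else:
            low = mid
    return (1 - high) * sample + high * random_noise |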
Geometric progression to search for stepsize.
Keep decreasing stepsize by half until reaching
the desired side of the boundary.
def geometric_progression_for_stepsize(x, update, dist, decision_function,
current_iteration):
""" Geometric progression to search for ste... |
Choose the delta at the scale of distance
between x and perturbed sample.
def select_delta(dist_post_update, current_iteration,
clip_max, clip_min, d, theta, constraint):
"""
Choose the delta at the scale of distance
between x and perturbed sample.
"""
if current_iteration == 1:
delt... |
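A sketch completing the truncated scale selection; the constants follow the usual HopSkipJump choices but are assumptions here:
import numpy as np
def select_delta_sketch(dist_post_update, current_iteration,
                        clip_max, clip_min, d, theta, constraint):
    if current_iteration == 1:
        # First iteration: a small fraction of the valid input range.
        return 0.1 * (clip_max - clip_min)
    if constraint == 'l2':
        return np.sqrt(d) * theta * dist_post_update
    # linf
    return d * theta * dist_post_update |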
Return a tensor that constructs adversarial examples for the given
input. Generate uses tf.py_func in order to operate over tensors.
:param x: A tensor with the inputs.
:param kwargs: See `parse_params`
def generate(self, x, **kwargs):
"""
Return a tensor that constructs adversarial examples for th... |
Generate adversarial images in a for loop.
:param y: An array of shape (n, nb_classes) for true labels.
:param y_target: An array of shape (n, nb_classes) for target labels.
Required for targeted attack.
:param image_target: An array of shape (n, **image shape) for initial
target images. Required f... |
:param y: A tensor of shape (1, nb_classes) for true labels.
:param y_target: A tensor of shape (1, nb_classes) for target labels.
Required for targeted attack.
:param image_target: A tensor of shape (1, **image shape) for initial
target images. Required for targeted attack.
:param initial_num_eval... |
Main algorithm for Boundary Attack ++.
Return a tensor that constructs adversarial examples for the given
input. Generate uses tf.py_func in order to operate over tensors.
:param sample: input image, without the batch-size dimension.
:param target_label: integer for targeted attack,
None for nont... |
Takes in a dictionary of parameters and applies attack-specific checks
before saving them as attributes.
Attack-specific parameters:
:param layer: (required str) name of the layer to target.
:param eps: (optional float) maximum distortion of adversarial example
compared to original inpu... |
TensorFlow implementation of the Fast Feature Gradient. This is a
single step attack similar to Fast Gradient Method that attacks an
internal representation.
:param x: the input placeholder
:param eta: A tensor the same shape as x that holds the perturbation.
:param g_feat: model's internal tensor ... |
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param g: The target value of the symbolic representation
:param kwargs: See `parse_params`
def generate(self, x, g, **kwargs):
"""
Generate symbolic graph for adversarial examples and return.
... |
Make a confidence report and save it to disk.
def main(argv=None):
"""
Make a confidence report and save it to disk.
"""
try:
_name_of_script, filepath = argv
except ValueError:
raise ValueError(argv)
print(filepath)
make_confidence_report_bundled(filepath=filepath,
... |
Builds the 35x35 resnet block.
def block35(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 35x35 resnet block."""
with tf.variable_scope(scope, 'Block35', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 32, 1, scope='Conv2d_1x1')
... |
Builds the 17x17 resnet block.
def block17(net, scale=1.0, activation_fn=tf.nn.relu, scope=None, reuse=None):
"""Builds the 17x17 resnet block."""
with tf.variable_scope(scope, 'Block17', [net], reuse=reuse):
with tf.variable_scope('Branch_0'):
tower_conv = slim.conv2d(net, 192, 1, scope='Conv2d_1x1')
... |
Inception model from http://arxiv.org/abs/1602.07261.
Constructs an Inception Resnet v2 network from inputs to the given final
endpoint. This method can construct the network up to the final inception
block Conv2d_7b_1x1.
Args:
inputs: a tensor of size [batch_size, height, width, channels].
final_end... |
Creates the Inception Resnet V2 model.
Args:
inputs: a 4-D tensor of size [batch_size, height, width, 3].
nb_classes: number of predicted classes.
is_training: whether is training or not.
dropout_keep_prob: float, the fraction to keep before final layer.
reuse: whether or not the network and its ... |
Returns the scope with the default parameters for inception_resnet_v2.
Args:
weight_decay: the weight decay for weights variables.
batch_norm_decay: decay for the moving average of batch_norm momentums.
batch_norm_epsilon: small float added to variance to avoid dividing by zero.
Returns:
an arg_sco... |
Load training and test data.
def ld_mnist():
"""Load training and test data."""
def convert_types(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
dataset, info = tfds.load('mnist', data_dir='gs://tfds-data/datasets', with_info=True,
as_... |
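A sketch completing the truncated loader, assuming TF2 and tensorflow_datasets; the custom data_dir from the snippet is omitted here:
import tensorflow as tf
import tensorflow_datasets as tfds
def ld_mnist_sketch(batch_size=128):
    def convert_types(image, label):
        # Scale pixels from [0, 255] to [0, 1].
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label
    dataset, info = tfds.load('mnist', with_info=True, as_supervised=True)
    train = dataset['train'].map(convert_types).shuffle(10000).batch(batch_size)
    test = dataset['test'].map(convert_types).batch(batch_size)
    return train, test |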
MNIST CleverHans tutorial
:param train_start: index of first training set example
:param train_end: index of last training set example
:param test_start: index of first test set example
:param test_end: index of last test set example
:param nb_epochs: number of epochs to train model
:param batch_size: size ... |
Validate all submissions and copy them into place
def main(args):
"""Validate all submissions and copy them into place"""
random.seed()
temp_dir = tempfile.mkdtemp()
logging.info('Created temporary directory: %s', temp_dir)
validator = SubmissionValidator(
source_dir=args.source_dir,
target_dir=a... |
Common method to update submission statistics.
def _update_stat(self, submission_type, increase_success, increase_fail):
"""Common method to update submission statistics."""
stat = self.stats.get(submission_type, (0, 0))
stat = (stat[0] + increase_success, stat[1] + increase_fail)
self.stats[submission... |
Print statistics into log.
def log_stats(self):
    """Print statistics into log."""
    logging.info('Validation statistics: ')
    for k, v in iteritems(self.stats):
        logging.info('%s - %d valid out of %d total submissions',
                     k, v[0], v[0] + v[1]) |
Copies submission from Google Cloud Storage to local directory.
Args:
cloud_path: path of the submission in Google Cloud Storage
Returns:
name of the local file where submission is copied to
def copy_submission_locally(self, cloud_path):
"""Copies submission from Google Cloud Storage to local... |
Copies submission to target directory.
Args:
src_filename: source filename of the submission
dst_subdir: subdirectory of the target directory where submission should
be copied to
submission_id: ID of the submission, will be used as a new
submission filename (before extension)
def... |
Validates one submission and copies it to target directory.
Args:
submission_path: path in Google Cloud Storage of the submission file
def validate_and_copy_one_submission(self, submission_path):
"""Validates one submission and copies it to target directory.
Args:
submission_path: path in Goo... |
Saves mapping from submission IDs to original filenames.
This mapping is saved as CSV file into target directory.
def save_id_to_path_mapping(self):
"""Saves mapping from submission IDs to original filenames.
This mapping is saved as CSV file into target directory.
"""
if not self.id_to_path_mapp... |
Runs validation of all submissions.
def run(self):
    """Runs validation of all submissions."""
    cmd = ['gsutil', 'ls', os.path.join(self.source_dir, '**')]
    try:
        # check_output returns bytes in Python 3; decode before splitting.
        files_list = subprocess.check_output(cmd).decode('utf-8').split('\n')
    except subprocess.CalledProcessError:
        logging.error("Can't read source directo... |
Takes the path to a directory with reports and renders success/fail plots.
def main(argv=None):
"""Takes the path to a directory with reports and renders success/fail plots."""
report_paths = argv[1:]
fail_names = FLAGS.fail_names.split(',')
for report_path in report_paths:
plot_report_from_path(report_p... |
Returns True if work piece is unclaimed.
def is_unclaimed(work):
"""Returns True if work piece is unclaimed."""
if work['is_completed']:
return False
cutoff_time = time.time() - MAX_PROCESSING_TIME
if (work['claimed_worker_id'] and
work['claimed_worker_start_time'] is not None
and work['claimed... |
Writes all work pieces into datastore.
Each work piece is identified by ID. This method writes/updates only those
work pieces whose IDs are stored in this class. For example, if this class
has only work pieces with IDs '1' ... '100' and datastore already contains
work pieces with IDs '50' ... '200' t... |
Reads all work pieces from the datastore.
def read_all_from_datastore(self):
"""Reads all work pieces from the datastore."""
self._work = {}
client = self._datastore_client
parent_key = client.key(KIND_WORK_TYPE, self._work_type_entity_id)
for entity in client.query_fetch(kind=KIND_WORK, ancestor=p... |
Reads undone work pieces which are assigned to the shard with given id.
def _read_undone_shard_from_datastore(self, shard_id=None):
"""Reads undone work pieces which are assigned to the shard with given id."""
self._work = {}
client = self._datastore_client
parent_key = client.key(KIND_WORK_TYPE, self._work_... |
Reads undone work from the datastore.
If shard_id and num_shards are specified then this method will attempt
to read undone work for shard with id shard_id. If no undone work was found
then it will try to read shard (shard_id+1) and so on until it either finds a
shard with undone work or all shards are read... |
Tries to pick next unclaimed piece of work to do.
Attempt to claim work piece is done using Cloud Datastore transaction, so
only one worker can claim any work piece at a time.
Args:
worker_id: ID of current worker
submission_id: if not None then this method will try to pick
piece of work ... |
Updates work piece in datastore as completed.
Args:
worker_id: ID of the worker which did the work
work_id: ID of the work which was done
other_values: dictionary with additional values which should be saved
with the work piece
error: if not None then error occurred during computatio... |
Computes statistics from all work pieces stored in this class.
def compute_work_statistics(self):
"""Computes statistics from all work pieces stored in this class."""
result = {}
for v in itervalues(self.work):
submission_id = v['submission_id']
if submission_id not in result:
result[su... |
Initializes work pieces from adversarial batches.
Args:
adv_batches: dict with adversarial batches,
could be obtained as AdversarialBatches.data
def init_from_adversarial_batches(self, adv_batches):
"""Initializes work pieces from adversarial batches.
Args:
adv_batches: dict with adver... |
Initializes work pieces from classification batches.
Args:
class_batches: dict with classification batches, could be obtained
as ClassificationBatches.data
num_shards: number of shards to split data into,
if None then no sharding is done.
def init_from_class_batches(self, class_batches... |
Make a confidence report and save it to disk.
def main(argv=None):
"""
Make a confidence report and save it to disk.
"""
assert len(argv) >= 3
_name_of_script = argv[0]
model_filepath = argv[1]
adv_x_filepaths = argv[2:]
sess = tf.Session()
with sess.as_default():
model = serial.load(model_filep... |
TensorFlow implementation of the Fast Gradient Method.
:param x: the input placeholder
:param logits: output of model.get_logits
:param y: (optional) A placeholder for the true labels. If targeted
is true, then provide the target label. Otherwise, only provide
this parameter if you'd like ... |
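A condensed sketch of the single-step attack the docstring describes, restricted to the L-infinity norm for brevity and assuming one-hot labels:
import tensorflow as tf
def fgm_sketch(x, logits, y, eps=0.3, clip_min=None, clip_max=None,
               targeted=False):
    # Cross-entropy between the provided labels and the model logits.
    loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)
    if targeted:
        # Targeted attack: step so as to decrease the target-label loss.
        loss = -loss
    grad, = tf.gradients(loss, x)
    # One step of size eps in the sign of the gradient.
    adv_x = x + eps * tf.sign(grad)
    if clip_min is not None and clip_max is not None:
        adv_x = tf.clip_by_value(adv_x, clip_min, clip_max)
    return adv_x |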
Returns the graph for Fast Gradient Method adversarial examples.
:param x: The model's symbolic inputs.
:param kwargs: See `parse_params`
def generate(self, x, **kwargs):
"""
Returns the graph for Fast Gradient Method adversarial examples.
:param x: The model's symbolic inputs.
:param kwargs:... |
Function to read the weights from checkpoint based on json description.
Args:
checkpoint: tensorflow checkpoint with trained model to
verify
model_json: path of json file with model description of
the network list of dictionary items for each layer
containing 'type', 'weight_var... |
Performs forward pass through the layer weights at layer_index.
Args:
vector: vector that has to be passed through in forward pass
layer_index: index of the layer
is_transpose: whether the weights of the layer have to be transposed
is_abs: whether to take the absolute value of the weights
... |
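A heavily simplified sketch of such a forward pass for a dense layer; the signature is hypothetical (the original indexes into a list of per-layer weights by layer_index):
import numpy as np
def forward_pass_sketch(vector, weight, is_transpose=False, is_abs=False):
    w = np.abs(weight) if is_abs else weight
    if is_transpose:
        # Transposed weights implement the backward/adjoint direction.
        w = w.T
    return w.dot(vector) |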
Returns a hexdigest of all the python files in the module.
def dev_version():
"""
Returns a hexdigest of all the python files in the module.
"""
md5_hash = hashlib.md5()
py_files = sorted(list_files(suffix=".py"))
if not py_files:
return ''
for filename in py_files:
with open(filename, 'rb') as ... |
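A sketch of how the truncated loop plausibly finishes, feeding each file's raw bytes into the running digest (list_files comes from the snippet above):
import hashlib
def dev_version_sketch():
    md5_hash = hashlib.md5()
    py_files = sorted(list_files(suffix=".py"))
    if not py_files:
        return ''
    for filename in py_files:
        with open(filename, 'rb') as f:
            # Hash file contents in a deterministic (sorted) order.
            md5_hash.update(f.read())
    return md5_hash.hexdigest() |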
The current implementation of report printing.
:param report: ConfidenceReport
def current(report):
"""
The current implementation of report printing.
:param report: ConfidenceReport
"""
if hasattr(report, "completed"):
if report.completed:
print("Report completed")
else:
print("REPORT ... |
The deprecated implementation of report printing.
:param report: dict
def deprecated(report):
"""
The deprecated implementation of report printing.
:param report: dict
"""
warnings.warn("Printing dict-based reports is deprecated. This function "
"is included only to support a private develo... |
A simple model trained to minimize Cross Entropy and maximize Soft Nearest
Neighbor Loss at each internal layer. This outputs a t-SNE of the sign of
the adversarial gradients of a trained model. A model with a negative
SNNL_factor will show little or no class clusters, while a model with a
0 SNNL_factor will hav... |
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `parse_params`
def generate(self, x, **kwargs):
"""
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `pa... |
Takes in a dictionary of parameters and applies attack-specific checks
before saving them as attributes.
Attack-specific parameters:
:param eps: (optional float) maximum distortion of adversarial example
compared to original input
:param eps_iter: (optional float) step size for each att... |