Computes matrices T and V using the Lanczos algorithm.
Args:
k: number of iterations and dimensionality of the tridiagonal matrix
Returns:
eig_vec: eigenvector corresponding to the minimum eigenvalue
def construct_lanczos_params(self):
"""Computes matrices T and V using the Lanczos algorithm.
Ar... |
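A minimal NumPy sketch of what such a routine computes, assuming the matrix is only available through a symmetric matrix-vector product `matvec` (all names here are hypothetical, not the repository's API): k Lanczos iterations build an orthonormal basis V and a tridiagonal T whose extreme eigenvalues approximate those of the operator.

import numpy as np

def lanczos_sketch(matvec, n, k, rng=np.random):
    """Hypothetical sketch: k Lanczos iterations for a symmetric operator
    on R^n given only as a matrix-vector product `matvec`.
    Returns the k x k tridiagonal T and the n x k orthonormal basis V."""
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    v, v_prev = rng.randn(n), np.zeros(n)
    v /= np.linalg.norm(v)
    for j in range(k):
        V[:, j] = v
        w = matvec(v)
        alpha[j] = v @ w
        # Three-term recurrence: orthogonalize against the two latest vectors.
        w = w - alpha[j] * v - (beta[j - 1] * v_prev if j > 0 else 0.0)
        if j + 1 < k:
            beta[j] = np.linalg.norm(w)
            v_prev, v = v, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return T, V

# The minimum eigenvalue/eigenvector are then approximated from T's
# eigendecomposition: if (s, u) is T's smallest eigenpair, eig_val ~ s
# and eig_vec ~ V @ u.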
Function that constructs minimization objective from dual variables.
def set_differentiable_objective(self):
"""Function that constructs minimization objective from dual variables."""
# Checking if graphs are already created
if self.vector_g is not None:
return
# Computing the scalar term
bi... |
Function that provides matrix product interface with PSD matrix.
Args:
vector: the vector to be multiplied with matrix H
Returns:
result_product: Matrix product of H and vector
def get_h_product(self, vector, dtype=None):
"""Function that provides matrix product interface with PSD matrix.
... |
Function that provides matrix product interface with PSD matrix.
Args:
vector: the vector to be multiplied with matrix M
Returns:
result_product: Matrix product of M and vector
def get_psd_product(self, vector, dtype=None):
"""Function that provides matrix product interface with PSD matrix.
... |
Function that returns the tf graph corresponding to the entire matrix M.
Returns:
matrix_h: unrolled version of tf matrix corresponding to H
matrix_m: unrolled tf matrix corresponding to M
def get_full_psd_matrix(self):
"""Function that returns the tf graph corresponding to the entire matrix M.
... |
Run binary search to find a value for nu that makes M PSD
Args:
original_nu: starting value of nu to do binary search on
feed_dictionary: dictionary of updated lambda variables to feed into M
Returns:
new_nu: new value of nu
def make_m_psd(self, original_nu, feed_dictionary):
"""Run binar... |
Computes the minimum eigenvalue and corresponding eigenvector of matrix M or H
using the Lanczos algorithm.
Args:
compute_m: boolean to determine whether we should compute eig val/vec
for M or for H. True for M; False for H.
feed_dict: dictionary mapping from TF placeholders to values (optional)
... |
Function to compute the certificate based on either the current value
or dual variables loaded from the dual folder
def compute_certificate(self, current_step, feed_dictionary):
""" Function to compute the certificate based on either the current value
or dual variables loaded from the dual folder """
feed_dict = feed_dictio... |
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `parse_params`
def generate(self, x, **kwargs):
"""
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `pars... |
Takes in a dictionary of parameters and applies attack-specific checks
before saving them as attributes.
:param n_samples: (optional) The number of transformations sampled to
construct the attack. Set it to None to run
full grid attack.
:param dx_min: (optional flo... |
Defines the right convolutional layer according to the
version of Keras that is installed.
:param filters: (required integer) the dimensionality of the output
space (i.e. the number of output filters in the
convolution)
:param kernel_shape: (required tuple or list of 2 integers... |
Defines a CNN model using the Keras Sequential model
:param logits: If set to False, returns a Keras model, otherwise will also
return logits tensor
:param input_ph: The TensorFlow tensor for the input
(needed if returning logits)
("ph" stands for placeholder but it... |
Looks for the name of the softmax layer.
:return: Softmax layer name
def _get_softmax_name(self):
"""
Looks for the name of the softmax layer.
:return: Softmax layer name
"""
for layer in self.model.layers:
cfg = layer.get_config()
if 'activation' in cfg and cfg['activation'] == 'so... |
Looks for the names of abstracted layers.
Usually these layers appear when the model is stacked.
:return: List of abstracted layers
def _get_abstract_layer_name(self):
"""
Looks for the names of abstracted layers.
Usually these layers appear when the model is stacked.
:return: List of abstracted layers
... |
Looks for the name of the layer producing the logits.
:return: name of layer producing the logits
def _get_logits_name(self):
"""
Looks for the name of the layer producing the logits.
:return: name of layer producing the logits
"""
softmax_name = self._get_softmax_name()
softmax_layer = sel... |
:param x: A symbolic representation of the network input.
:return: A symbolic representation of the logits
def get_logits(self, x):
"""
:param x: A symbolic representation of the network input.
:return: A symbolic representation of the logits
"""
logits_name = self._get_logits_name()
logits... |
:param x: A symbolic representation of the network input.
:return: A symbolic representation of the probs
def get_probs(self, x):
"""
:param x: A symbolic representation of the network input.
:return: A symbolic representation of the probs
"""
name = self._get_softmax_name()
return self.ge... |
:return: Names of all the layers kept by Keras
def get_layer_names(self):
"""
:return: Names of all the layers kept by Keras
"""
layer_names = [x.name for x in self.model.layers]
return layer_names |
Exposes all the layers of the model returned by get_layer_names.
:param x: A symbolic representation of the network input
:return: A dictionary mapping layer names to the symbolic
representation of their output.
def fprop(self, x):
"""
Exposes all the layers of the model returned by get_la... |
Expose the hidden features of a model given a layer name.
:param x: A symbolic representation of the network input
:param layer: The name of the hidden layer to return features at.
:return: A symbolic representation of the hidden features
:raise: NoSuchLayerError if `layer` is not in the model.
def get... |
Returns extraction command based on the filename extension.
def get_extract_command_template(filename):
"""Returns extraction command based on the filename extension."""
for k, v in iteritems(EXTRACT_COMMAND):
if filename.endswith(k):
return v
return None |
Calls shell command with parameter substitution.
Args:
command: command to run as a list of tokens
**kwargs: dictionary with substitutions
Returns:
whether command was successful, i.e. returned 0 status code
Example of usage:
shell_call(['cp', '${A}', '${B}'], A='src_file', B='dst_file')
wil... |
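A minimal sketch of the substitution-and-call behavior described above, using `string.Template` as a stand-in for whatever token substitution the real implementation performs (the function name is hypothetical):

import subprocess
from string import Template

def shell_call_sketch(command, **kwargs):
    """Hypothetical sketch: substitute ${NAME} tokens, run the command,
    and report whether it exited with status code 0."""
    expanded = [Template(token).safe_substitute(**kwargs) for token in command]
    return subprocess.call(expanded) == 0

# Example from the docstring:
#   shell_call_sketch(['cp', '${A}', '${B}'], A='src_file', B='dst_file')
# runs `cp src_file dst_file`.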
Makes directory readable and writable by everybody.
Args:
dirname: name of the directory
Returns:
True if operation was successful
If you run something inside a Docker container and it writes files, then
these files will be written as the root user with restricted permissions.
So to be able to read/modi... |
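Given the root-ownership issue the docstring explains, the fix presumably amounts to a recursive chmod run with elevated privileges; a minimal sketch (the function name is hypothetical):

import subprocess

def make_directory_writable_sketch(dirname):
    """Hypothetical sketch: recursively grant read/write/execute to everybody.
    sudo is needed because the files may be owned by root (written from
    inside a Docker container); returns True on success."""
    return subprocess.call(['sudo', 'chmod', '-R', 'a+rwx', dirname]) == 0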
Cleans up and prepares the temporary directory.
def _prepare_temp_dir(self):
"""Cleans up and prepare temporary directory."""
if not shell_call(['sudo', 'rm', '-rf', os.path.join(self._temp_dir, '*')]):
logging.error('Failed to cleanup temporary directory.')
sys.exit(1)
# NOTE: we do not create self... |
Extracts submission and moves it into self._extracted_submission_dir.
def _extract_submission(self, filename):
"""Extracts submission and moves it into self._extracted_submission_dir."""
# verify filesize
file_size = os.path.getsize(filename)
if file_size > MAX_SUBMISSION_SIZE_ZIPPED:
logging.err... |
Verifies size of Docker image.
Args:
image_name: name of the Docker image.
Returns:
True if image size is within the limits, False otherwise.
def _verify_docker_image_size(self, image_name):
"""Verifies size of Docker image.
Args:
image_name: name of the Docker image.
Returns:... |
Prepares sample data for the submission.
Args:
submission_type: type of the submission.
def _prepare_sample_data(self, submission_type):
"""Prepares sample data for the submission.
Args:
submission_type: type of the submission.
"""
# write images
images = np.random.randint(0, 256,... |
Verifies correctness of the submission output.
Args:
submission_type: type of the submission
Returns:
True if output looks valid
def _verify_output(self, submission_type):
"""Verifies correctness of the submission output.
Args:
submission_type: type of the submission
Returns:
... |
Validates submission.
Args:
filename: submission filename
Returns:
submission metadata or None if submission is invalid
def validate_submission(self, filename):
"""Validates submission.
Args:
filename: submission filename
Returns:
submission metadata or None if submissio... |
Save loss in json format
def save(self, path):
    """Save loss in json format
    """
    # Open in text mode ('w'): json.dump writes str, so 'wb' fails on Python 3
    with open(os.path.join(path, 'loss.json'), 'w') as f:
        json.dump(dict(loss=self.__class__.__name__,
                       params=self.hparams), f) |
Pairwise Euclidean distance between two matrices.
:param A: a matrix.
:param B: a matrix.
:returns: A tensor for the pairwise Euclidean distances between A and B.
def pairwise_euclid_distance(A, B):
"""Pairwise Euclidean distance between two matrices.
:param A: a matrix.
:param B: a matrix.
:return... |
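A common way to realize this without explicit loops is the expansion ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2 applied to all row pairs at once; a hedged TensorFlow sketch (whether the library returns squared or true distances is not visible from the truncated snippet):

import tensorflow as tf

def pairwise_euclid_distance_sketch(A, B):
    """Hypothetical sketch: squared Euclidean distance between every row of A
    and every row of B, shape (rows_A, rows_B)."""
    sq_a = tf.reduce_sum(tf.square(A), axis=1, keepdims=True)  # (m, 1)
    sq_b = tf.reduce_sum(tf.square(B), axis=1, keepdims=True)  # (n, 1)
    inner = tf.matmul(A, B, transpose_b=True)                  # (m, n)
    # Clip tiny negatives caused by floating-point cancellation.
    return tf.maximum(sq_a - 2.0 * inner + tf.transpose(sq_b), 0.0)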
Pairwise cosine distance between two matrices.
:param A: a matrix.
:param B: a matrix.
:returns: A tensor for the pairwise cosine distances between A and B.
def pairwise_cos_distance(A, B):
"""Pairwise cosine distance between two matrices.
:param A: a matrix.
:param B: a matrix.
:returns: A tensor ... |
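Analogously, cosine distance reduces to a single matrix product after row normalization; a hedged sketch:

import tensorflow as tf

def pairwise_cos_distance_sketch(A, B):
    """Hypothetical sketch: 1 - cosine similarity between all row pairs."""
    a_norm = tf.math.l2_normalize(A, axis=1)
    b_norm = tf.math.l2_normalize(B, axis=1)
    return 1.0 - tf.matmul(a_norm, b_norm, transpose_b=True)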
Exponentiated pairwise distance between each element of A and
all those of B.
:param A: a matrix.
:param B: a matrix.
:param temp: Temperature
:param cos_distance: Boolean for using cosine or Euclidean distance.
:returns: A tensor for the exponentiated pairwise distance between
each element of A... |
Row normalized exponentiated pairwise distance between all the elements
of x. Conceptualized as the probability of sampling a neighbor point for
every element of x, decreasing with the distance between the points.
:param x: a matrix
:param temp: Temperature
:param cos_distance: Boolean for using cosine or... |
Masking matrix such that element i,j is 1 iff y[i] == y2[j].
:param y: a list of labels
:param y2: a list of labels
:returns: A tensor for the masking matrix.
def same_label_mask(y, y2):
"""Masking matrix such that element i,j is 1 iff y[i] == y2[i].
:param y: a list of labels
:param y2: a lis... |
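The mask can be built by broadcasting an equality comparison; a one-function sketch:

import tensorflow as tf

def same_label_mask_sketch(y, y2):
    """Hypothetical sketch: mask[i, j] = 1.0 iff y[i] == y2[j]."""
    return tf.cast(tf.equal(tf.expand_dims(y, 1), tf.expand_dims(y2, 0)),
                   tf.float32)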
The pairwise sampling probabilities for the elements of x for neighbor
points which share labels.
:param x: a matrix
:param y: a list of labels for each element of x
:param temp: Temperature
:param cos_distance: Boolean for using cosine or Euclidean distance
:returns: A tensor for the pairwise sampli... |
Soft Nearest Neighbor Loss
:param x: a matrix.
:param y: a list of labels for each element of x.
:param temp: Temperature.
:param cos_distance: Boolean for using cosine or Euclidean distance.
:returns: A tensor for the Soft Nearest Neighbor Loss of the points
in x with labels y.
def SNNL(x... |
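Putting the previous pieces together, the loss for each point is the negative log of the fraction of its exponentiated-distance sampling mass that falls on same-label neighbors; a hedged sketch using squared Euclidean distance (the temperature handling and masking details of the real implementation may differ):

import tensorflow as tf

def snnl_sketch(x, y, temp=100.0):
    """Hypothetical sketch of the Soft Nearest Neighbor Loss for a batch x
    with integer labels y at temperature temp."""
    n = tf.shape(x)[0]
    x = tf.reshape(x, [n, -1])
    sq = tf.reduce_sum(tf.square(x), axis=1, keepdims=True)
    dist = sq - 2.0 * tf.matmul(x, x, transpose_b=True) + tf.transpose(sq)
    # Exponentiated distances; subtracting the identity removes self-pairs.
    f = tf.exp(-dist / temp) - tf.eye(n)
    same = tf.cast(tf.equal(tf.expand_dims(y, 1), tf.expand_dims(y, 0)),
                   f.dtype)
    eps = 1e-9  # guard against empty same-label neighborhoods
    prob = tf.reduce_sum(f * same, axis=1) / (tf.reduce_sum(f, axis=1) + eps)
    return -tf.reduce_mean(tf.math.log(prob + eps))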
The optimized variant of Soft Nearest Neighbor Loss. Every time this
tensor is evaluated, the temperature is optimized to minimize the loss
value, which results in more numerically stable calculations of the SNNL.
:param x: a matrix.
:param y: a list of labels for each element of x.
:param initial_te... |
Display an image.
:param ndarray: The image as an ndarray
:param min_val: The minimum pixel value in the image format
:param max_val: The maximum pixel value in the image format
If min_val and max_val are not specified, attempts to
infer whether the image is in any of the common ranges:
[0, 1], [-1,... |
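The range inference mentioned here presumably boils down to inspecting the extremes of the array; a hypothetical sketch of that guess:

import numpy as np

def infer_pixel_range_sketch(ndarray):
    """Hypothetical sketch: guess (min_val, max_val) among the common
    formats [0, 1], [-1, 1] and [0, 255] from the observed values."""
    lo, hi = float(np.min(ndarray)), float(np.max(ndarray))
    if lo < 0.0:
        return -1.0, 1.0   # signed floats
    if hi <= 1.0:
        return 0.0, 1.0    # unit-interval floats
    return 0.0, 255.0      # 8-bit style values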
Save an image, represented as an ndarray, to the filesystem
:param path: string, filepath
:param ndarray: The image as an ndarray
:param min_val: The minimum pixel value in the image format
:param max_val: The maximum pixel value in the image format
If min_val and max_val are not specified, attempts to
... |
Converts an ndarray to a PIL image.
:param ndarray: The numpy ndarray to convert
:param min_val: The minimum pixel value in the image format
:param max_val: The maximum pixel value in the image format
If min_val and max_val are not specified, attempts to
infer whether the image is in any of the common ran... |
Turns a batch of images into one big image.
:param image_batch: ndarray, shape (batch_size, rows, cols, channels)
:returns: a big image containing all `batch_size` images in a grid
def make_grid(image_batch):
"""
Turns a batch of images into one big image.
:param image_batch: ndarray, shape (batch_size, row... |
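One plausible way to implement this tiling, assuming a near-square grid padded with zeros when `batch_size` is not a perfect square:

import numpy as np

def make_grid_sketch(image_batch):
    """Hypothetical sketch: tile a (batch_size, rows, cols, channels) batch
    into one big grid image."""
    batch, rows, cols, ch = image_batch.shape
    grid_cols = int(np.ceil(np.sqrt(batch)))
    grid_rows = int(np.ceil(batch / grid_cols))
    grid = np.zeros((grid_rows * rows, grid_cols * cols, ch),
                    dtype=image_batch.dtype)
    for idx, img in enumerate(image_batch):
        r, c = divmod(idx, grid_cols)
        grid[r * rows:(r + 1) * rows, c * cols:(c + 1) * cols] = img
    return grid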
Generate adversarial examples and return them as a NumPy array.
:param x_val: A NumPy array with the original inputs.
:param **kwargs: optional parameters used by child classes.
:return: A NumPy array holding the adversarial examples.
def generate_np(self, x_val, **kwargs):
"""
Generate adversaria... |
Generates the adversarial sample for the given input.
:param x: The model's inputs.
:param eps: (optional float) attack step size (input variation)
:param ord: (optional) Order of the norm (mimics NumPy).
Possible values: np.inf, 1 or 2.
:param y: (optional) A tf variable with the model... |
TensorFlow Eager implementation of the Fast Gradient Method.
:param x: the input variable
:param targeted: Is the attack targeted or untargeted? Untargeted, the
default, will try to make the label incorrect.
Targeted will instead try to move in the direction
... |
Returns random data to be used with `feed_dict`.
:param rng: A numpy.random.RandomState instance
:param placeholders: List of tensorflow placeholders
:return: A dict mapping placeholders to random numpy values
def random_feed_dict(rng, placeholders):
"""
Returns random data to be used with `feed_dict`.
:pa... |
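A compact sketch of what such a helper can look like, assuming every placeholder has a fully defined static shape (the function name is hypothetical):

def random_feed_dict_sketch(rng, placeholders):
    """Hypothetical sketch: map each TF placeholder to standard-normal noise
    of the matching shape and dtype."""
    return {p: rng.randn(*p.shape.as_list()).astype(p.dtype.as_numpy_dtype)
            for p in placeholders}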
Returns a list of all files in CleverHans with the given suffix.
Parameters
----------
suffix : str
Returns
-------
file_list : list
A list of all files in CleverHans whose filepath ends with `suffix`.
def list_files(suffix=""):
"""
Returns a list of all files in CleverHans with the given suff... |
Returns a list of all files ending in `suffix` contained within `path`.
Parameters
----------
path : str
a filepath
suffix : str
Returns
-------
l : list
A list of all files ending in `suffix` contained within `path`.
(If `path` is a file rather than a directory, it is considered
... |
Prints header with given text and frame composed of '#' characters.
def print_header(text):
"""Prints header with given text and frame composed of '#' characters."""
print()
print('#'*(len(text)+4))
print('# ' + text + ' #')
print('#'*(len(text)+4))
print() |
Saves dictionary as CSV file.
def save_dict_to_file(filename, dictionary):
"""Saves dictionary as CSV file."""
with open(filename, 'w') as f:
writer = csv.writer(f)
for k, v in iteritems(dictionary):
writer.writerow([str(k), str(v)]) |
Main function which runs master.
def main(args):
"""Main function which runs master."""
if args.blacklisted_submissions:
logging.warning('BLACKLISTED SUBMISSIONS: %s',
args.blacklisted_submissions)
if args.limited_dataset:
logging.info('Using limited dataset: 3 batches * 10 images')
... |
When work is already populated, asks whether we should continue.
This method prints a warning message that work is populated and asks
whether the user wants to continue or not.
Args:
work: instance of WorkPiecesBase
Returns:
True if we should continue and populate datastore, False if we should s... |
Prepares all data needed for evaluation of attacks.
def prepare_attacks(self):
"""Prepares all data needed for evaluation of attacks."""
print_header('PREPARING ATTACKS DATA')
# verify that attacks data not written yet
if not self.ask_when_work_is_populated(self.attack_work):
return
self.atta... |
Prepares all data needed for evaluation of defenses.
def prepare_defenses(self):
"""Prepares all data needed for evaluation of defenses."""
print_header('PREPARING DEFENSE DATA')
# verify that defense data not written yet
if not self.ask_when_work_is_populated(self.defense_work):
return
self.... |
Saves statistics about each submission.
Saved statistics include score; number of completed and failed batches;
min, max, average and median time needed to run one batch.
Args:
run_stats: dictionary with runtime statistics for submissions,
can be generated by WorkPiecesBase.compute_work_stat... |
Saves sorted (by score) results of the evaluation.
Args:
run_stats: dictionary with runtime statistics for submissions,
can be generated by WorkPiecesBase.compute_work_statistics
scores: dictionary mapping submission ids to scores
image_count: dictionary with number of images processed by... |
Reads dataset metadata.
Returns:
instance of DatasetMetadata
def _read_dataset_metadata(self):
"""Reads dataset metadata.
Returns:
instance of DatasetMetadata
"""
blob = self.storage_client.get_blob(
'dataset/' + self.dataset_name + '_dataset.csv')
buf = BytesIO()
blob... |
Computes results (scores, stats, etc...) of competition evaluation.
Results are saved into output directory (self.results_dir).
This method also saves all intermediate data into the output directory,
so it can resume computation if it was interrupted for some reason.
This is useful because computat... |
Shows status for given work pieces.
Args:
work: instance of either AttackWorkPieces or DefenseWorkPieces
def _show_status_for_work(self, work):
"""Shows status for given work pieces.
Args:
work: instance of either AttackWorkPieces or DefenseWorkPieces
"""
work_count = len(work.work)
... |
Saves errors for given work pieces into file.
Args:
work: instance of either AttackWorkPieces or DefenseWorkPieces
output_file: name of the output file
def _export_work_errors(self, work, output_file):
"""Saves errors for given work pieces into file.
Args:
work: instance of either Attac... |
Shows current status of competition evaluation.
This method also saves error messages generated by attacks and defenses
into attack_errors.txt and defense_errors.txt.
def show_status(self):
"""Shows current status of competition evaluation.
This method also saves error messages generated by attacks a... |
Cleans up data of failed attacks.
def cleanup_failed_attacks(self):
"""Cleans up data of failed attacks."""
print_header('Cleaning up failed attacks')
attacks_to_replace = {}
self.attack_work.read_all_from_datastore()
failed_submissions = set()
error_msg = set()
for k, v in iteritems(self.a... |
Cleans up data about attacks which generated zero images.
def cleanup_attacks_with_zero_images(self):
"""Cleans up data about attacks which generated zero images."""
print_header('Cleaning up attacks which generated 0 images.')
# find out attack work to cleanup
self.adv_batches.init_from_datastore()
... |
Asks confirmation and then deletes entries with keys.
Args:
keys_to_delete: list of datastore keys for which entries should be deleted
def _cleanup_keys_with_confirmation(self, keys_to_delete):
"""Asks confirmation and then deletes entries with keys.
Args:
keys_to_delete: list of datastore ke... |
Cleans up all data about defense work in current round.
def cleanup_defenses(self):
"""Cleans up all data about defense work in current round."""
print_header('CLEANING UP DEFENSES DATA')
work_ancestor_key = self.datastore_client.key('WorkType', 'AllDefenses')
keys_to_delete = [
e.key
f... |
Cleans up datastore and deletes all information about current round.
def cleanup_datastore(self):
"""Cleans up datastore and deletes all information about current round."""
print_header('CLEANING UP ENTIRE DATASTORE')
kinds_to_delete = [u'Submission', u'SubmissionType',
u'DatasetImag... |
Run the sample attack
def main(_):
"""Run the sample attack"""
eps = FLAGS.max_epsilon / 255.0
batch_shape = [FLAGS.batch_size, FLAGS.image_height, FLAGS.image_width, 3]
with tf.Graph().as_default():
x_input = tf.placeholder(tf.float32, shape=batch_shape)
noisy_images = x_input + eps * tf.sign(tf.rand... |
TensorFlow implementation of the JSMA (see https://arxiv.org/abs/1511.07528
for details about the algorithm design choices).
:param x: the input placeholder
:param y_target: the target tensor
:param model: a cleverhans.model.Model object.
:param theta: delta for each feature adjustment
:param gamma: a floa... |
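The core of JSMA is the saliency map over input features; a hedged NumPy sketch of the per-feature score for the increasing-feature variant from the paper (`jacobian` here is a hypothetical (num_classes, num_features) array of logit gradients, not part of the repository's API):

import numpy as np

def jsma_saliency_sketch(jacobian, target):
    """Hypothetical sketch: score features that raise the target logit while
    lowering all other logits; everything else scores zero."""
    grad_target = jacobian[target]                    # d logit_t / d x_i
    grad_others = jacobian.sum(axis=0) - grad_target  # sum over classes j != t
    mask = (grad_target > 0) & (grad_others < 0)
    return np.where(mask, grad_target * np.abs(grad_others), 0.0)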
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `parse_params`
def generate(self, x, **kwargs):
"""
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `pa... |
Takes in a dictionary of parameters and applies attack-specific checks
before saving them as attributes.
Attack-specific parameters:
:param theta: (optional float) Perturbation introduced to modified
components (can be positive or negative)
:param gamma: (optional float) Maximum perce... |
Create a multi-GPU model similar to the basic cnn in the tutorials.
def make_basic_ngpu(nb_classes=10, input_shape=(None, 28, 28, 1), **kwargs):
"""
Create a multi-GPU model similar to the basic cnn in the tutorials.
"""
model = make_basic_cnn()
layers = model.layers
model = MLPnGPU(nb_classes, layers, in... |
Create a multi-GPU model similar to Madry et al. (arXiv:1706.06083).
def make_madry_ngpu(nb_classes=10, input_shape=(None, 28, 28, 1), **kwargs):
"""
Create a multi-GPU model similar to Madry et al. (arXiv:1706.06083).
"""
layers = [Conv2DnGPU(32, (5, 5), (1, 1), "SAME"),
ReLU(),
MaxPoo... |
Build the core model within the graph.
def _build_model(self, x):
"""Build the core model within the graph."""
with tf.variable_scope('init'):
x = self._conv('init_conv', x, 3, x.shape[3], 16,
self._stride_arr(1))
strides = [1, 2, 2]
activate_before_residual = [True, False, ... |
Build the graph for cost from the logits if logits are provided.
If predictions are provided, logits are extracted from the operation.
def build_cost(self, labels, logits):
"""
Build the graph for cost from the logits if logits are provided.
If predictions are provided, logits are extracted from the op... |
Build training specific ops for the graph.
def build_train_op_from_cost(self, cost):
"""Build training specific ops for the graph."""
self.lrn_rate = tf.constant(self.hps.lrn_rate, tf.float32,
name='learning_rate')
self.momentum = tf.constant(self.hps.momentum, tf.float32,
... |
Layer normalization.
def _layer_norm(self, name, x):
"""Layer normalization."""
if self.init_layers:
bn = LayerNorm()
bn.name = name
self.layers += [bn]
else:
bn = self.layers[self.layer_idx]
self.layer_idx += 1
bn.device_name = self.device_name
bn.set_training(self.tr... |
Residual unit with 2 sub layers.
def _residual(self, x, in_filter, out_filter, stride,
activate_before_residual=False):
"""Residual unit with 2 sub layers."""
if activate_before_residual:
with tf.variable_scope('shared_activation'):
x = self._layer_norm('init_bn', x)
x = s... |
Bottleneck residual unit with 3 sub layers.
def _bottleneck_residual(self, x, in_filter, out_filter, stride,
activate_before_residual=False):
"""Bottleneck residual unit with 3 sub layers."""
if activate_before_residual:
with tf.variable_scope('common_bn_relu'):
x = sel... |
L2 weight decay loss.
def _decay(self):
"""L2 weight decay loss."""
if self.decay_cost is not None:
return self.decay_cost
costs = []
if self.device_name is None:
for var in tf.trainable_variables():
if var.op.name.find(r'DW') > 0:
costs.append(tf.nn.l2_loss(var))
els... |
Convolution.
def _conv(self, name, x, filter_size, in_filters, out_filters, strides):
"""Convolution."""
if self.init_layers:
conv = Conv2DnGPU(out_filters,
(filter_size, filter_size),
strides[1:3], 'SAME', w_name='DW')
conv.name = name
self.lay... |
FullyConnected layer for final output.
def _fully_connected(self, x, out_dim):
"""FullyConnected layer for final output."""
if self.init_layers:
fc = LinearnGPU(out_dim, w_name='DW')
fc.name = 'logits'
self.layers += [fc]
else:
fc = self.layers[self.layer_idx]
self.layer_idx +... |
Reads classification results from the file in Cloud Storage.
This method reads file with classification results produced by running
defense on a single batch of adversarial images.
Args:
storage_client: instance of CompetitionStorageClient or None for local file
file_path: path of the file with results
... |
Reads and analyzes one classification result.
This method reads file with classification result and counts
how many images were classified correctly and incorrectly,
how many times the target class was hit, and the total number of images.
Args:
storage_client: instance of CompetitionStorageClient
file_path: re... |
Saves matrix to the file.
Args:
filename: name of the file where to save matrix
remap_dim0: dictionary with mapping row indices to row names which should
be saved to file. If None, then indices will be used as names.
remap_dim1: dictionary with mapping column indices to column names which
... |
Populates data from adversarial batches and writes to datastore.
Args:
submissions: instance of CompetitionSubmissions
adv_batches: instance of AversarialBatches
def init_from_adversarial_batches_write_to_datastore(self, submissions,
adv_batches):... |
Initializes data by reading it from the datastore.
def init_from_datastore(self):
"""Initializes data by reading it from the datastore."""
self._data = {}
client = self._datastore_client
for entity in client.query_fetch(kind=KIND_CLASSIFICATION_BATCH):
class_batch_id = entity.key.flat_path[-1]
... |
Reads and returns single batch from the datastore.
def read_batch_from_datastore(self, class_batch_id):
"""Reads and returns single batch from the datastore."""
client = self._datastore_client
key = client.key(KIND_CLASSIFICATION_BATCH, class_batch_id)
result = client.get(key)
if result is not None... |
Computes classification results.
Args:
adv_batches: instance of AversarialBatches
dataset_batches: instance of DatasetBatches
dataset_meta: instance of DatasetMetadata
defense_work: instance of DefenseWorkPieces
Returns:
accuracy_matrix, error_matrix, hit_target_class_matrix,
... |
Parses type of participant based on submission filename.
Args:
submission_path: path to the submission in Google Cloud Storage
Returns:
dict with one element. The element key corresponds to the type of participant
(team, baseline); the element value is the ID of the participant.
Raises:
ValueError: if participa... |
Loads list of submissions from the directory.
Args:
dir_suffix: suffix of the directory where submissions are stored,
one of the following constants: ATTACK_SUBDIR, TARGETED_ATTACK_SUBDIR
or DEFENSE_SUBDIR.
id_pattern: pattern which is used to generate (internal) IDs
for submissi... |
Init list of submissions from Storage and saves them to Datastore.
Should be called only once (typically by master) during evaluation of
the competition.
def init_from_storage_write_to_datastore(self):
"""Init list of sumibssions from Storage and saves them to Datastore.
Should be called only once (t... |
Writes all submissions to datastore.
def _write_to_datastore(self):
"""Writes all submissions to datastore."""
# Populate datastore
roots_and_submissions = zip([ATTACKS_ENTITY_KEY,
TARGET_ATTACKS_ENTITY_KEY,
DEFENSES_ENTITY_KEY],
... |
Init list of submissions from Datastore.
Should be called by each worker during initialization.
def init_from_datastore(self):
"""Init list of submission from Datastore.
Should be called by each worker during initialization.
"""
self._attacks = {}
self._targeted_attacks = {}
self._defenses... |
Returns IDs of all attacks (targeted and non-targeted).
def get_all_attack_ids(self):
"""Returns IDs of all attacks (targeted and non-targeted)."""
return list(self.attacks.keys()) + list(self.targeted_attacks.keys()) |
Finds submission by ID.
Args:
submission_id: ID of the submission
Returns:
SubmissionDescriptor with information about submission or None if
submission is not found.
def find_by_id(self, submission_id):
"""Finds submission by ID.
Args:
submission_id: ID of the submission
... |
Returns human readable submission external ID.
Args:
submission_id: internal submission ID.
Returns:
human readable ID.
def get_external_id(self, submission_id):
"""Returns human readable submission external ID.
Args:
submission_id: internal submission ID.
Returns:
human... |
Cleans up and prepares the temporary directory.
def _prepare_temp_dir(self):
"""Cleans up and prepare temporary directory."""
shell_call(['rm', '-rf', os.path.join(self._temp_dir, '*')])
# NOTE: we do not create self._extracted_submission_dir
# this is intentional because self._tmp_extracted_dir or its sub... |
Loads and verifies metadata.
Args:
submission_type: type of the submission
Returns:
dictionary with metadata or None if metadata not found or invalid
def _load_and_verify_metadata(self, submission_type):
"""Loads and verifies metadata.
Args:
submission_type: type of the submission
... |
Runs submission inside Docker container.
Args:
metadata: dictionary with submission metadata
Returns:
True if status code of Docker command was success (i.e. zero),
False otherwise.
def _run_submission(self, metadata):
"""Runs submission inside Docker container.
Args:
metadat... |
Tensorflow 2.0 implementation of the Fast Gradient Method.
:param model_fn: a callable that takes an input tensor and returns the model logits.
:param x: input tensor.
:param eps: epsilon (input variation parameter); see https://arxiv.org/abs/1412.6572.
:param ord: Order of the norm (mimics NumPy). Possible val... |
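A minimal TF2 sketch of the infinity-norm case of this signature (untargeted, cross-entropy loss; the real function supports more options than shown here):

import tensorflow as tf

def fgm_sketch(model_fn, x, eps, y=None):
    """Hypothetical sketch of an infinity-norm Fast Gradient Method step."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    with tf.GradientTape() as tape:
        tape.watch(x)
        logits = model_fn(x)
        if y is None:
            # Untargeted by default: attack the model's own predictions.
            y = tf.argmax(logits, axis=1)
        loss = loss_fn(y, logits)
    grad = tape.gradient(loss, x)
    return x + eps * tf.sign(grad)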