Method which downloads submission to local directory. def download(self): """Method which downloads submission to local directory.""" # Structure of the download directory: # submission_dir=LOCAL_SUBMISSIONS_DIR/submission_id # submission_dir/s.ext <-- archived submission # submission_dir/extract...
Creates a temporary copy of the extracted submission. When executed, the submission is allowed to modify its own directory. So to ensure that the submission does not pass any data between runs, a new copy of the submission is made before each run. After a run the temporary copy of the submission is deleted. Returns: ...
Runs docker command without time limit. Args: cmd: list with the command line arguments which are passed to docker binary Returns: how long it took to run submission in seconds Raises: WorkerError: if error occurred during execution of the submission def run_without_time_limit(...
Runs docker command and enforces time limit. Args: cmd: list with the command line arguments which are passed to docker binary after run time_limit: time limit, in seconds. Negative value means no limit. Returns: how long it took to run submission in seconds Raises: Worker...
Runs attack inside Docker. Args: input_dir: directory with input (dataset). output_dir: directory where output (adversarial images) should be written. epsilon: maximum allowed size of adversarial perturbation, should be in range [0, 255]. Returns: how long it took to run submis...
Runs defense inside Docker. Args: input_dir: directory with input (adversarial images). output_file_path: path of the output file. Returns: how long it took to run submission in seconds def run(self, input_dir, output_file_path): """Runs defense inside Docker. Args: input_dir...
Read `dataset_meta` field from bucket def read_dataset_metadata(self): """Read `dataset_meta` field from bucket""" if self.dataset_meta: return shell_call(['gsutil', 'cp', 'gs://' + self.storage_client.bucket_name + '/' + 'dataset/' + self.dataset_name + '_dataset.csv'...
Initializes data necessary to execute attacks. This method could be called multiple times; only the first call does initialization, subsequent calls are a no-op. def fetch_attacks_data(self): """Initializes data necessary to execute attacks. This method could be called multiple times, only first call does ...
Runs one attack work. Args: work_id: ID of the piece of work to run Returns: elapsed_time_sec, submission_id - elapsed time and id of the submission Raises: WorkerError: if error occurred during execution. def run_attack_work(self, work_id): """Runs one attack work. Args: ...
Method which evaluates all attack work. In a loop this method queries not completed attack work, picks one attack work and runs it. def run_attacks(self): """Method which evaluates all attack work. In a loop this method queries not completed attack work, picks one attack work and runs it. """...
Lazy initialization of data necessary to execute defenses. def fetch_defense_data(self): """Lazy initialization of data necessary to execute defenses.""" if self.defenses_data_initialized: return logging.info('Fetching defense data from datastore') # init data from datastore self.submissions....
Runs one defense work. Args: work_id: ID of the piece of work to run Returns: elapsed_time_sec, submission_id - elapsed time and id of the submission Raises: WorkerError: if error occurred during execution. def run_defense_work(self, work_id): """Runs one defense work. Args: ...
Method which evaluates all defense work. In a loop this method queries not completed defense work, picks one defense work and runs it. def run_defenses(self): """Method which evaluates all defense work. In a loop this method queries not completed defense work, picks one defense work and runs it. ...
Run attacks and defenses def run_work(self): """Run attacks and defenses""" if os.path.exists(LOCAL_EVAL_ROOT_DIR): sudo_remove_dirtree(LOCAL_EVAL_ROOT_DIR) self.run_attacks() self.run_defenses()
Returns a hashable summary of the types of arg_names within kwargs. :param arg_names: tuple containing names of relevant arguments :param kwargs: dict mapping string argument names to values. These must be values for which we can create a tf placeholder. Currently supported: numpy ndarray or something that c...
Construct the graph required to run the attack through generate_np. :param fixed: Structural elements that require defining a new graph. :param feedable: Arguments that can be fed to the same graph when they take different values. :param x_val: symbolic adversarial example :param h...
Generate adversarial examples and return them as a NumPy array. Sub-classes *should not* implement this method unless they must perform special handling of arguments. :param x_val: A NumPy array with the original inputs. :param **kwargs: optional parameters used by child classes. :return: A NumPy a...
Construct the inputs to the attack graph to be used by generate_np. :param kwargs: Keyword arguments to generate_np. :return: Structural arguments Feedable arguments Output of `arg_type` describing feedable arguments A unique key def construct_variables(self, kwargs): """ Const...
Get the label to use in generating an adversarial example for x. The kwargs are fed directly from the kwargs of the attack. If 'y' is in kwargs, then assume it's an untargeted attack and use that as the label. If 'y_target' is in kwargs and is not None, then assume it's a targeted attack and use tha...
As described in https://arxiv.org/abs/1511.06581 def dueling_model(img_in, num_actions, scope, noisy=False, reuse=False, concat_softmax=False): """As described in https://arxiv.org/abs/1511.06581""" with tf.variable_scope(scope, reuse=reuse): out = img_in with tf.variable_scope("convnet")...
MNIST tutorial for the Jacobian-based saliency map approach (JSMA) :param train_start: index of first training set example :param train_end: index of last training set example :param test_start: index of first test set example :param test_end: index of last test set example :param viz_enabled: (boolean) activ...
Generate symbolic graph for adversarial examples and return. :param x: The model's symbolic inputs. :param kwargs: Keyword arguments. See `parse_params` for documentation. def generate(self, x, **kwargs): """ Generate symbolic graph for adversarial examples and return. :param x: The model's symbo...
Takes in a dictionary of parameters and applies attack-specific checks before saving them as attributes. Attack-specific parameters: :param eps: (optional float) maximum distortion of adversarial example compared to original input :param eps_iter: (optional float) step size for each att...
Run (optionally multi-replica, synchronous) training to minimize `loss` :param sess: TF session to use when training the graph :param loss: tensor, the loss to minimize :param x_train: numpy array with training inputs or tf Dataset :param y_train: numpy array with training outputs or tf Dataset :param init_al...
Calculate the average gradient for each shared variable across all towers. Note that this function provides a synchronization point across all towers. Args: tower_grads: List of lists of (gradient, variable) tuples. The outer list is over individual gradients. The inner list is over the gradient c...
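The averaging step described above can be sketched in NumPy; this is an illustration of the synchronization logic, not the TensorFlow implementation, and `average_gradients` mirrors the (gradient, variable) pairing from the docstring:

```python
import numpy as np

def average_gradients(tower_grads):
    """Average per-tower (gradient, variable) pairs -- a NumPy sketch.

    tower_grads: list (one entry per tower) of lists of (grad, var) pairs,
    where pairs at the same position refer to the same shared variable.
    Returns a single list of (averaged_grad, var) pairs.
    """
    averaged = []
    # zip(*...) groups the pairs for one variable across all towers.
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars]
        var = grads_and_vars[0][1]  # same variable on every tower
        averaged.append((np.mean(grads, axis=0), var))
    return averaged
```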
Creates the symbolic graph of an adversarial example given the name of an attack. Simplifies creating the symbolic graph of an attack by defining dataset-specific parameters. Dataset-specific default parameters are used unless a different value is given in kwargs. :param model: an object of Model class :pa...
Log values to standard output and Tensorflow summary. :param tag: summary tag. :param val: (required float or numpy array) value to be logged. :param desc: (optional) additional description to be printed. def log_value(self, tag, val, desc=''): """ Log values to standard output and Tensorflow summ...
Evaluate the accuracy of the model on adversarial examples :param x: symbolic input to model. :param y: symbolic variable for the label. :param preds_adv: symbolic variable for the prediction on an adversarial example. :param X_test: NumPy array of test set inputs. :param Y_te...
Run the evaluation on multiple attacks. def eval_multi(self, inc_epoch=True): """ Run the evaluation on multiple attacks. """ sess = self.sess preds = self.preds x = self.x_pre y = self.y X_train = self.X_train Y_train = self.Y_train X_test = self.X_test Y_test = self.Y_test...
Runs some code that will crash if the GPUs / GPU driver are suffering from a common bug. This helps to prevent contaminating results in the rest of the library with incorrect calculations. def run_canary(): """ Runs some code that will crash if the GPUs / GPU driver are suffering from a common bug. This help...
Wraps a callable `f` in a function that warns that the function is deprecated. def _wrap(f): """ Wraps a callable `f` in a function that warns that the function is deprecated. """ def wrapper(*args, **kwargs): """ Issues a deprecation warning and passes through the arguments. """ warnings.warn(...
This function used to be needed to support tf 1.4 and earlier, but support for tf 1.4 and earlier is now dropped. :param op_func: the function to handle, e.g. tf.reduce_sum. :param input_tensor: The tensor to reduce. Should have numeric type. :param axis: The dimensions to reduce. If None (the default), ...
Wrapper around tf.nn.softmax_cross_entropy_with_logits_v2 to handle the deprecation warning def softmax_cross_entropy_with_logits(sentinel=None, labels=None, logits=None, dim=-1): """ Wrapper around tf.nn...
Enforces size of perturbation on images, and compute hashes for all images. Args: dataset_batch_dir: directory with the images of specific dataset batch adv_dir: directory with generated adversarial images output_dir: directory where to copy result epsilon: size of perturbation Returns: dictio...
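The epsilon-enforcement step above can be sketched as a per-pixel clip; `enforce_epsilon` is a hypothetical array-level helper (the real function works on image directories and also computes hashes):

```python
import numpy as np

def enforce_epsilon(original, adversarial, epsilon):
    """Clip adversarial images so every pixel stays within `epsilon` of the
    original and inside the valid [0, 255] range -- a sketch of the size
    check described above, with arrays instead of directories.
    """
    low = np.clip(original.astype(np.float64) - epsilon, 0, 255)
    high = np.clip(original.astype(np.float64) + epsilon, 0, 255)
    return np.clip(adversarial, low, high).astype(original.dtype)
```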
Downloads the dataset, organizes it by batches and renames images. Args: storage_client: instance of the CompetitionStorageClient image_batches: subclass of ImageBatchesBase with data about images target_dir: target directory, should exist and be empty local_dataset_copy: directory with local dataset copy,...
Saves file with target class for given dataset batch. Args: filename: output filename image_batches: instance of ImageBatchesBase with dataset batches batch_id: dataset batch ID def save_target_classes_for_batch(self, filename, ...
Function for min eigen vector using tf's full eigen decomposition. def tf_min_eig_vec(self): """Function for min eigen vector using tf's full eigen decomposition.""" # Full eigen decomposition requires the explicit psd matrix M _, matrix_m = self.dual_object.get_full_psd_matrix() [eig_vals, eig_vectors...
Function that returns smoothed version of min eigen vector. def tf_smooth_eig_vec(self): """Function that returns smoothed version of min eigen vector.""" _, matrix_m = self.dual_object.get_full_psd_matrix() # Easier to think in terms of max so negating the matrix [eig_vals, eig_vectors] = tf.self_adjo...
Computes the min eigen value and corresponding vector of matrix M. Args: use_tf_eig: Whether to use tf's default full eigen decomposition Returns: eig_val: Minimum absolute eigen value eig_vec: Corresponding eigen vector def get_min_eig_vec_proxy(self, use_tf_eig=False): """Computes the ...
Computes scipy estimate of min eigenvalue for matrix M. Returns: eig_val: Minimum absolute eigen value eig_vec: Corresponding eigen vector def get_scipy_eig_vec(self): """Computes scipy estimate of min eigenvalue for matrix M. Returns: eig_val: Minimum absolute eigen value eig_vec...
Create tensorflow op for running one step of descent. def prepare_for_optimization(self): """Create tensorflow op for running one step of descent.""" if self.params['eig_type'] == 'TF': self.eig_vec_estimate = self.get_min_eig_vec_proxy() elif self.params['eig_type'] == 'LZS': self.eig_vec_esti...
Run one step of gradient descent for optimization. Args: eig_init_vec_val: Start value for eigen value computations eig_num_iter_val: Number of iterations to run for eigen computations smooth_val: Value of smoothness parameter penalty_val: Value of penalty for the current step learnin...
Run the optimization, call run_one_step with suitable placeholders. Returns: True if certificate is found False otherwise def run_optimization(self): """Run the optimization, call run_one_step with suitable placeholders. Returns: True if certificate is found False otherwise ""...
Loads target classes. def load_target_class(input_dir): """Loads target classes.""" with tf.gfile.Open(os.path.join(input_dir, 'target_class.csv')) as f: return {row[0]: int(row[1]) for row in csv.reader(f) if len(row) >= 2}
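The same dict comprehension can be exercised without tf.gfile or a real input directory by reading from a string; `load_target_class_from_text` is an illustrative variant, not part of the original module:

```python
import csv
import io

def load_target_class_from_text(csv_text):
    """Parse target_class.csv content into {image_id: target_class}.

    Rows with fewer than two fields are skipped, matching the len(row) >= 2
    guard in the original function.
    """
    return {row[0]: int(row[1])
            for row in csv.reader(io.StringIO(csv_text)) if len(row) >= 2}
```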
Saves images to the output directory. Args: images: array with minibatch of images filenames: list of filenames without path If the number of file names in this list is less than the number of images in the minibatch, then only the first len(filenames) images will be saved. output_dir: directory where to sav...
Run the sample attack def main(_): """Run the sample attack""" # Images for inception classifier are normalized to be in [-1, 1] interval, # eps is a difference between pixels so it should be in [0, 2] interval. # Renormalizing epsilon from [0, 255] to [0, 2]. eps = 2.0 * FLAGS.max_epsilon / 255.0 alpha = ...
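The renormalization in the comment above is a one-liner; factoring it out makes the arithmetic checkable (the function name is illustrative):

```python
def renormalize_epsilon(max_epsilon):
    """Map a perturbation budget from pixel units [0, 255] to the [-1, 1]
    image scale used by the Inception classifier: pixel differences then
    live in [0, 2], so the budget is scaled by 2/255.
    """
    return 2.0 * max_epsilon / 255.0
```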
Applies DeepFool to a batch of inputs :param sess: TF session :param x: The input placeholder :param pred: The model's sorted symbolic output of logits, only the top nb_candidate classes are contained :param logits: The model's unnormalized output tensor (the input to the softmax...
TensorFlow implementation of DeepFool. Paper link: see https://arxiv.org/pdf/1511.04599.pdf :param sess: TF session :param x: The input placeholder :param predictions: The model's sorted symbolic output of logits, only the top nb_candidate classes are contained :param logits: The model's ...
Generate symbolic graph for adversarial examples and return. :param x: The model's symbolic inputs. :param kwargs: See `parse_params` def generate(self, x, **kwargs): """ Generate symbolic graph for adversarial examples and return. :param x: The model's symbolic inputs. :param kwargs: See `pa...
:param nb_candidate: The number of classes to test against, i.e., DeepFool only considers nb_candidate classes when attacking (thus accelerating speed). The nb_candidate classes are chosen according to the prediction confide...
PyFunc defined as given by Tensorflow :param func: Custom Function :param inp: Function Inputs :param Tout: Output type of the custom function :param stateful: Calculate Gradients when stateful is True :param name: Name of the PyFunction :param grad: Custom Gradient Function :return: def _py_func_with_gra...
Convert a pytorch model into a tensorflow op that allows backprop :param model: A pytorch nn.Module object :param out_dims: The number of output dimensions (classes) for the model :return: A model function that maps an input (tf.Tensor) to the output of the model (tf.Tensor) def convert_pytorch_model_to_tf(mod...
PyTorch implementation of the clip_eta in utils_tf. :param eta: Tensor :param ord: np.inf, 1, or 2 :param eps: float def clip_eta(eta, ord, eps): """ PyTorch implementation of the clip_eta in utils_tf. :param eta: Tensor :param ord: np.inf, 1, or 2 :param eps: float """ if ord not in [np.inf, 1, ...
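The `ord=np.inf` branch of clip_eta amounts to a component-wise clamp; a NumPy sketch (the real function also rescales for ord 1 and 2, which is not shown here):

```python
import numpy as np

def clip_eta_inf(eta, eps):
    """Project a perturbation onto the infinity-norm ball of radius eps by
    clamping every component into [-eps, eps] -- a NumPy sketch of the
    ord=np.inf case of clip_eta.
    """
    return np.clip(eta, -eps, eps)
```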
Get the label to use in generating an adversarial example for x. The kwargs are fed directly from the kwargs of the attack. If 'y' is in kwargs, then assume it's an untargeted attack and use that as the label. If 'y_target' is in kwargs and is not None, then assume it's a targeted attack and use that as the l...
Solves for the optimal input to a linear function under a norm constraint. Optimal_perturbation = argmax_{eta, ||eta||_{ord} < eps} dot(eta, grad) :param grad: Tensor, shape (N, d_1, ...). Batch of gradients :param eps: float. Scalar specifying size of constraint region :param ord: np.inf, 1, or 2. Order of n...
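For `ord=np.inf` the argmax above has the closed form eps * sign(grad); a NumPy sketch of just that case (the ord=1 and ord=2 cases instead concentrate the budget on the largest component or scale along the normalized gradient):

```python
import numpy as np

def optimize_linear_inf(grad, eps):
    """Solve argmax_{||eta||_inf <= eps} dot(eta, grad) -- the np.inf case
    of the linear problem described above. Each component of the optimum
    saturates the constraint in the direction of the gradient.
    """
    return eps * np.sign(grad)
```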
:param y: (optional) A tensor with the true labels for an untargeted attack. If None (and y_target is None) then use the original labels the classifier assigns. :param y_target: (optional) A tensor with the target labels for a targeted attack. :param beta: Trades off L2...
Perform the EAD attack on the given instance for the given targets. If self.targeted is true, then the targets represent the target labels. If self.targeted is false, then targets are the original class labels def attack(self, imgs, targets): """ Perform the EAD attack on the given instance for the gi...
Prints `text` surrounded by a box made of *s def print_in_box(text): """ Prints `text` surrounded by a box made of *s """ print('') print('*' * (len(text) + 6)) print('** ' + text + ' **') print('*' * (len(text) + 6)) print('')
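To make the box layout testable without capturing stdout, the same arithmetic can build the lines as strings; `box_lines` is an illustrative variant of the helper above:

```python
def box_lines(text):
    """Build the lines print_in_box would print (minus the blank padding):
    a bar of len(text) + 6 asterisks above and below '** text **'.
    """
    bar = '*' * (len(text) + 6)
    return [bar, '** ' + text + ' **', bar]
```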
Validates the submission. def main(args): """ Validates the submission. """ print_in_box('Validating submission ' + args.submission_filename) random.seed() temp_dir = args.temp_dir delete_temp_dir = False if not temp_dir: temp_dir = tempfile.mkdtemp() logging.info('Created temporary directory: ...
Make a confidence report and save it to disk. def main(argv=None): """ Make a confidence report and save it to disk. """ try: _name_of_script, filepath = argv except ValueError: raise ValueError(argv) make_confidence_report(filepath=filepath, test_start=FLAGS.test_start, te...
Load a saved model, gather its predictions, and save a confidence report. This function works by running a single MaxConfidence attack on each example, using SPSA as the underlying optimizer. This is not intended to be a strong generic attack. It is intended to be a test to uncover gradient masking. :param...
Make a confidence report and save it to disk. def main(argv=None): """ Make a confidence report and save it to disk. """ try: _name_of_script, filepath = argv except ValueError: raise ValueError(argv) make_confidence_report_spsa(filepath=filepath, test_start=FLAGS.test_start, ...
This method creates a symbolic graph of the MadryEtAl attack on multiple GPUs. The graph is created on the first n GPUs. Stop gradient is needed to get the speed-up. This prevents us from being able to back-prop through the attack. :param x: A tensor with the input image. :param y_p: Ground truth ...
Facilitates testing this attack. def generate_np(self, x_val, **kwargs): """ Facilitates testing this attack. """ _, feedable, _feedable_types, hash_key = self.construct_variables(kwargs) if hash_key not in self.graphs: with tf.variable_scope(None, 'attack_%d' % len(self.graphs)): # ...
Takes in a dictionary of parameters and applies attack-specific checks before saving them as attributes. Attack-specific parameters: :param ngpu: (required int) the number of GPUs available. :param kwargs: A dictionary of parameters for MadryEtAl attack. def parse_params(self, ngpu=1, **kwargs): ""...
Compute the accuracy of a TF model on some data :param sess: TF session to use when training the graph :param model: cleverhans.model.Model instance :param x: numpy array containing input examples (e.g. MNIST().x_test ) :param y: numpy array containing example labels (e.g. MNIST().y_test ) :param batch_size: ...
Return the model's classification of the input data, and the confidence (probability) assigned to each example. :param sess: tf.Session :param model: cleverhans.model.Model :param x: numpy array containing input examples (e.g. MNIST().x_test ) :param y: numpy array containing true labels (Needed only if u...
Report whether the model is correct and its confidence on each example in a dataset. :param sess: tf.Session :param model: cleverhans.model.Model :param x: numpy array containing input examples (e.g. MNIST().x_test ) :param y: numpy array containing example labels (e.g. MNIST().y_test ) :param batch_size: N...
Run attack on every example in a dataset. :param sess: tf.Session :param model: cleverhans.model.Model :param x: numpy array containing input examples (e.g. MNIST().x_test ) :param y: numpy array containing example labels (e.g. MNIST().y_test ) :param attack: cleverhans.attack.Attack :param attack_params: d...
Generic computation engine for evaluating an expression across a whole dataset, divided into batches. This function assumes that the work can be parallelized with one worker device handling one batch of data. If you need multiple devices per batch, use `batch_eval`. The tensorflow graph for multiple workers...
A helper function that computes a tensor on numpy inputs by batches. This version uses exactly the tensorflow graph constructed by the caller, so the caller can place specific ops on specific devices to implement model parallelism. Most users probably prefer `batch_eval_multi_worker` which maps a single-devic...
Makes sure a `y` argument is a valid numpy dataset. def _check_y(y): """ Makes sure a `y` argument is a valid numpy dataset. """ if not isinstance(y, np.ndarray): raise TypeError("y must be numpy array. Typically y contains " "the entire test set labels. Got " + str(y) + " of type " + s...
Read png images from input directory in batches. Args: input_dir: input directory batch_shape: shape of minibatch array, i.e. [batch_size, height, width, 3] Yields: filenames: list file names without path of each image Length of this list could be less than batch_size, in this case only fi...
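The batching behavior described above (a final batch that may be shorter than batch_size) can be sketched without PNG decoding; `batch_filenames` is a hypothetical helper illustrating just the grouping:

```python
def batch_filenames(filenames, batch_size):
    """Yield filenames in batches of at most batch_size.

    The final batch may be shorter, matching the loader's note that fewer
    than batch_size images can be yielded for the last minibatch.
    """
    batch = []
    for name in filenames:
        batch.append(name)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # leftover partial batch
        yield batch
```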
Run the sample attack def main(_): """Run the sample attack""" batch_shape = [FLAGS.batch_size, FLAGS.image_height, FLAGS.image_width, 3] for filenames, images in load_images(FLAGS.input_dir, batch_shape): save_images(images, filenames, FLAGS.output_dir)
Creates a preprocessing graph for a batch given a function that processes a single image. :param images_batch: A tensor for an image batch. :param preproc_func: (optional function) A function that takes in a tensor and returns a preprocessed input. def preprocess_batch(images_batch, preproc_func=None): ...
:param x: A symbolic representation (Tensor) of the network input :return: A symbolic representation (Tensor) of the output logits (i.e., the values fed as inputs to the softmax layer). def get_logits(self, x, **kwargs): """ :param x: A symbolic representation (Tensor) of the network input :return:...
:param x: A symbolic representation (Tensor) of the network input :return: A symbolic representation (Tensor) of the predicted label def get_predicted_class(self, x, **kwargs): """ :param x: A symbolic representation (Tensor) of the network input :return: A symbolic representation (Tensor) of the predi...
:param x: A symbolic representation (Tensor) of the network input :return: A symbolic representation (Tensor) of the output probabilities (i.e., the output values produced by the softmax layer). def get_probs(self, x, **kwargs): """ :param x: A symbolic representation (Tensor) of the network input ...
Provides access to the model's parameters. :return: A list of all Variables defining the model parameters. def get_params(self): """ Provides access to the model's parameters. :return: A list of all Variables defining the model parameters. """ if hasattr(self, 'params'): return list(self...
Create all Variables to be returned later by get_params. By default this is a no-op. Models that need their fprop to be called for their params to be created can set `needs_dummy_fprop=True` in the constructor. def make_params(self): """ Create all Variables to be returned later by get_params. ...
Return a layer output. :param x: tensor, the input to the network. :param layer: str, the name of the layer to compute. :param **kwargs: dict, extra optional params to pass to self.fprop. :return: the content of layer `layer` def get_layer(self, x, layer, **kwargs): """Return a layer output. :p...
Takes in confidence values for predictions and correct labels for the data, plots a reliability diagram. :param confidence: nb_samples x nb_classes (e.g., output of softmax) :param labels: vector of nb_samples :param filepath: where to save the diagram :return: def plot_reliability_diagram(confidence, labels...
Initializes locality-sensitive hashing with FALCONN to find nearest neighbors in training data. def init_lsh(self): """ Initializes locality-sensitive hashing with FALCONN to find nearest neighbors in training data. """ self.query_objects = { } # contains the object that can be queried to find nea...
Given a data_activations dictionary that contains a np array with activations for each layer, find the knns in the training data. def find_train_knns(self, data_activations): """ Given a data_activations dictionary that contains a np array with activations for each layer, find the knns in the training da...
Given a dictionary of nb_data x nb_classes dimension, compute the nonconformity of each candidate label for each data point: i.e. the number of knns whose label is different from the candidate label. def nonconformity(self, knns_labels): """ Given a dictionary of nb_data x nb_classes dimension, compu...
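The counting step described above can be sketched in NumPy; this standalone `nonconformity` takes a plain (nb_data, k) array of neighbor labels rather than the DkNN's internal dictionary:

```python
import numpy as np

def nonconformity(knns_labels, nb_classes):
    """For each data point and each candidate class, count the nearest
    neighbors whose label differs from that class -- the nonconformity
    score described above.

    knns_labels: int array of shape (nb_data, k) with neighbor labels.
    Returns an (nb_data, nb_classes) array of counts.
    """
    nb_data, _ = knns_labels.shape
    scores = np.zeros((nb_data, nb_classes), dtype=int)
    for c in range(nb_classes):
        scores[:, c] = np.sum(knns_labels != c, axis=1)
    return scores
```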
Given an array of nb_data x nb_classes dimensions, use conformal prediction to compute the DkNN's prediction, confidence and credibility. def preds_conf_cred(self, knns_not_in_class): """ Given an array of nb_data x nb_classes dimensions, use conformal prediction to compute the DkNN's prediction, confi...
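The conformal-prediction step can be sketched for a single test point; this `preds_conf_cred` uses the standard p-value construction (fraction of calibration scores at least as nonconforming), which is an assumption about the intended computation, not a copy of the DkNN code:

```python
import numpy as np

def preds_conf_cred(knns_not_in_class, cali_nonconformity):
    """Return (prediction, confidence, credibility) for one test point.

    knns_not_in_class: per-class nonconformity scores for the test point.
    cali_nonconformity: nonconformity scores on the calibration set.
    p-value of class c = fraction of calibration scores >= score of c;
    prediction is the class with the highest p-value, credibility is that
    p-value, and confidence is 1 minus the runner-up p-value.
    """
    p_values = np.array([np.mean(cali_nonconformity >= s)
                         for s in knns_not_in_class])
    order = np.argsort(p_values)
    pred = int(order[-1])
    cred = float(p_values[order[-1]])
    conf = float(1.0 - p_values[order[-2]])
    return pred, conf, cred
```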
Performs a forward pass through the DkNN on a numpy array of data. def fprop_np(self, data_np): """ Performs a forward pass through the DkNN on a numpy array of data. """ if not self.calibrated: raise ValueError( "DkNN needs to be calibrated by calling DkNNModel.calibrate method once ...
Performs a forward pass through the DkNN on a TF tensor by wrapping the fprop_np method. def fprop(self, x): """ Performs a forward pass through the DkNN on a TF tensor by wrapping the fprop_np method. """ logits = tf.py_func(self.fprop_np, [x], tf.float32) return {self.O_LOGITS: logits}
Runs the DkNN on holdout data to calibrate the credibility metric. :param cali_data: np array of calibration data. :param cali_labels: np vector of calibration labels. def calibrate(self, cali_data, cali_labels): """ Runs the DkNN on holdout data to calibrate the credibility metric. :param cali_dat...
MNIST cleverhans tutorial :param nb_epochs: number of epochs to train model :param batch_size: size of training batches :param learning_rate: learning rate for training :return: an AccuracyReport object def mnist_tutorial(nb_epochs=NB_EPOCHS, batch_size=BATCH_SIZE, train_end=-1, test_end=-1,...
TensorFlow implementation for applying perturbations to input features based on saliency maps :param i: index of first selected feature :param j: index of second selected feature :param X: a matrix containing our input features for our sample :param increase: boolean; true if we are increasing pixels, false other...
TensorFlow implementation for computing saliency maps :param grads_target: a matrix containing forward derivatives for the target class :param grads_other: a matrix where every element is the sum of forward derivatives over all non-target classes at that index :param s...
TensorFlow implementation of the forward derivative / Jacobian :param x: the input placeholder :param grads: the list of TF gradients returned by jacobian_graph() :param target: the target misclassification class :param X: numpy array with sample input :param nb_features: the number of features in the input ...
This class implements either the Basic Iterative Method (Kurakin et al. 2016) when rand_init is set to 0, or the Madry et al. (2017) method when rand_minmax is larger than 0. Paper link (Kurakin et al. 2016): https://arxiv.org/pdf/1607.02533.pdf Paper link (Madry et al. 2017): https://arxiv.org/pdf/1706.06083.p...
Batch normalization. def _batch_norm(name, x): """Batch normalization.""" with tf.name_scope(name): return tf.contrib.layers.batch_norm( inputs=x, decay=.9, center=True, scale=True, activation_fn=None, updates_collections=None, is_training=False)
Residual unit with 2 sub layers. def _residual(x, in_filter, out_filter, stride, activate_before_residual=False): """Residual unit with 2 sub layers.""" if activate_before_residual: with tf.variable_scope('shared_activation'): x = _batch_norm('init_bn', x) x = _relu(x, 0.1) orig...
L2 weight decay loss. def _decay(): """L2 weight decay loss.""" costs = [] for var in tf.trainable_variables(): if var.op.name.find('DW') > 0: costs.append(tf.nn.l2_loss(var)) return tf.add_n(costs)
Relu, with optional leaky support. def _relu(x, leakiness=0.0): """Relu, with optional leaky support.""" return tf.where(tf.less(x, 0.0), leakiness * x, x, name='leaky_relu')
Build the core model within the graph. def set_input_shape(self, input_shape): """Build the core model within the graph.""" batch_size, rows, cols, input_channels = input_shape # assert self.mode == 'train' or self.mode == 'eval' input_shape = list(input_shape) input_shape[0] = 1 dummy_batch = ...
Function to create variables for the projected dual object. Function that projects the input dual variables onto the feasible set. Returns: projected_dual: Feasible dual solution corresponding to current dual def create_projected_dual(self): """Function to create variables for the projected dual obje...