Given a cardinality k and true label y, return a random value in {1,...,k} \ {y}. def choose_other_label(k, y): """Given a cardinality k and true label y, return a random value in {1,...,k} \ {y}.""" return choice(list(set(range(1, k + 1)) - set([y])))
Generate Gaussian bags of words based on label assignments Args: Y: np.array of true labels sigma: (float) the standard deviation of the Gaussian distributions bag_size: (list) the min and max length of bags of words Returns: X: (Tensor) a tensor of indices representing tokens ...
Generate a random tree-structured dependency graph based on a specified edge probability. Also create helper data struct mapping child -> parent. def _generate_edges(self, edge_prob): """Generate a random tree-structured dependency graph based on a specified edge probability. ...
Compute the conditional probability P_\theta(\lambda_i | \lambda_j, y) = Z^{-1} \exp( \theta_{i|y} \indpm{ \lambda_i = y } + \theta_{i,j} \indpm{ \lambda_i = \lambda_j } ) In other words, compute the conditional probability that LF i outputs ...
Generate an [n,m] label matrix with entries in {0,...,k} def _generate_label_matrix(self): """Generate an [n,m] label matrix with entries in {0,...,k}""" self.L = np.zeros((self.n, self.m)) self.Y = np.zeros(self.n, dtype=np.int64) for i in range(self.n): y = choice(self.k, ...
Compute the true clique conditional probabilities P(\lC | Y) by counting given L, Y; we'll use this as ground truth to compare to. Note that this generates an attribute, self.c_probs, that has the same definition as returned by `LabelModel.get_conditional_probs`. TODO: Can compute thes...
Execute fit and transform in sequence. def fit_transform(self, input, **fit_kwargs): """Execute fit and transform in sequence.""" self.fit(input, **fit_kwargs) X = self.transform(input) return X
Builds a vocabulary object based on the tokens in the input. Args: sents: A list of lists of tokens (representing sentences) Vocab kwargs include: max_size min_freq specials unk_init def fit(self, sents, **kwargs): """Builds a vocabu...
Converts lists of tokens into a Tensor of embedding indices. Args: sents: A list of lists of tokens (representing sentences) NOTE: These sentences should already be marked using the mark_entities() helper. Returns: X: A Tensor of shape (num_items,...
Argmax with random tie-breaking Args: x: a 1-dim numpy array Returns: the argmax index def rargmax(x, eps=1e-8): """Argmax with random tie-breaking Args: x: a 1-dim numpy array Returns: the argmax index """ idxs = np.where(abs(x - np.max(x, axis=0)) < eps)[...
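The truncated `rargmax` snippet above can be completed as a short self-contained sketch (same signature and eps-tolerance logic as the docstring describes):

```python
import numpy as np

def rargmax(x, eps=1e-8):
    """Argmax with random tie-breaking over a 1-dim numpy array."""
    # Indices whose value is within eps of the maximum are considered tied
    idxs = np.where(np.abs(x - np.max(x)) < eps)[0]
    # Pick uniformly at random among the tied maxima
    return np.random.choice(idxs)
```

With a unique maximum this behaves like `np.argmax`; with ties it samples among the tied indices.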
Converts a 1D tensor of predicted labels into a 2D tensor of probabilistic labels Args: Y_h: an [n], or [n,1] tensor of predicted (int) labels in {1,...,k} k: the largest possible label in Y_h Returns: Y_s: a torch.FloatTensor of shape [n, k] where Y_s[i, j-1] is the probabilistic ...
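A NumPy sketch of the same conversion (the library version returns a `torch.FloatTensor`; the off-by-one column mapping follows the docstring, where label j fills column j-1):

```python
import numpy as np

def pred_to_prob(Y_h, k):
    """Convert an [n] array of int labels in {1,...,k} to an [n, k]
    matrix of probabilistic labels, with all mass on the given label."""
    Y_h = np.asarray(Y_h).ravel()
    n = Y_h.shape[0]
    Y_s = np.zeros((n, k))
    # Label j maps to column j-1 (labels are 1-indexed, columns 0-indexed)
    Y_s[np.arange(n), Y_h - 1] = 1.0
    return Y_s
```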
Convert a 1d array-like (e.g., list, tensor, etc.) to an np.ndarray def arraylike_to_numpy(array_like): """Convert a 1d array-like (e.g., list, tensor, etc.) to an np.ndarray""" orig_type = type(array_like) # Convert to np.ndarray if isinstance(array_like, np.ndarray): pass elif isinstanc...
Convert a matrix from one label type to another Args: Y: A np.ndarray or torch.Tensor of labels (ints) source: The convention the labels are currently expressed in dest: The convention to convert the labels to Conventions: 'categorical': [0: abstain, 1: positive, 2: negative] ...
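A minimal sketch of the conversion: the 'categorical' convention is given in the docstring, while the 'plusminus' ([0: abstain, 1: positive, -1: negative]) and 'onezero' ([1: positive, 0: negative]) conventions are assumptions inferred from the truncated text. Only the negative label differs between conventions, so a single remap suffices:

```python
import numpy as np

def convert_labels(Y, source, dest):
    """Remap the negative-class label between conventions.
    NOTE: 'plusminus' and 'onezero' encodings are assumed, not confirmed."""
    negative_map = {"categorical": 2, "plusminus": -1, "onezero": 0}
    Y = np.asarray(Y).copy()
    # Positive (1) is shared by all conventions; only negatives move
    Y[Y == negative_map[source]] = negative_map[dest]
    return Y
```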
Converts a 2D [n,m] label matrix into an [n,m,k] one hot 3D tensor Note that in the returned 3D matrix, abstain votes continue to be represented by 0s, not 1s. Args: L: a [n,m] label matrix with categorical labels (0 = abstain) k: the number of classes that could appear in L if...
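A NumPy sketch of the [n,m] to [n,m,k] expansion described above (the library likely returns a torch tensor; name `hard_label_to_onehot` is illustrative). Abstains (0) leave their fiber all-zero, matching the note in the docstring:

```python
import numpy as np

def hard_label_to_onehot(L, k):
    """Expand an [n, m] categorical label matrix (0 = abstain) into an
    [n, m, k] one-hot array; abstain entries stay all-zero."""
    n, m = L.shape
    L_onehot = np.zeros((n, m, k))
    for j in range(1, k + 1):
        # Channel j-1 is the indicator "this LF voted label j"
        L_onehot[:, :, j - 1] = (L == j)
    return L_onehot
```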
Merge dictionary y into a copy of x, overwriting elements of x when there is a conflict, except if the element is a dictionary, in which case recurse. misses: what to do if a key in y is not in x 'insert' -> set x[key] = value 'exception' -> raise an exception 'report' -> report t...
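The merge semantics above can be sketched as follows (a simplified version: the `report` behavior is assumed to print and skip, and the library version may additionally search nested sub-dicts for missing keys):

```python
def recursive_merge_dicts(x, y, misses="report"):
    """Merge dict y into a copy of x, recursing on nested dicts and
    overwriting scalars. For keys of y missing from x:
    'insert' adds them, 'exception' raises, 'report' prints and skips."""
    z = dict(x)
    for key, value in y.items():
        if key in z and isinstance(z[key], dict) and isinstance(value, dict):
            z[key] = recursive_merge_dicts(z[key], value, misses=misses)
        elif key in z or misses == "insert":
            z[key] = value
        elif misses == "exception":
            raise ValueError(f"Could not find kwarg '{key}' in destination dict.")
        elif misses == "report":
            print(f"Could not find kwarg '{key}' in destination dict.")
    return z
```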
Splits inputs into multiple splits of defined sizes Args: inputs: correlated tuples/lists/arrays/matrices/tensors to split splits: list containing split sizes (fractions or counts); shuffle: if True, shuffle the data before splitting stratify_by: (None or an input) if not None, use ...
Utility to place data on GPU, where data could be a torch.Tensor, a tuple or list of Tensors, or a tuple or list of tuple or lists of Tensors def place_on_gpu(data): """Utility to place data on GPU, where data could be a torch.Tensor, a tuple or list of Tensors, or a tuple or list of tuple or lists of Tens...
Builds the network from the input, middle, and head modules def _build(self, input_modules, middle_modules, head_modules): """Builds the network from the input, middle, and head modules""" self.input_layer = self._build_input_layer(input_modules) self.middle_layers = self._build_middle_layers(middle_modules) self.heads = self._build_task_heads(head_modules) # Construct ...
Creates and attaches task_heads to the appropriate network layers def _build_task_heads(self, head_modules): """Creates and attaches task_heads to the appropriate network layers""" # Make task head layer assignments num_layers = len(self.config["layer_out_dims"]) task_head_layers = self...
Returns a list of outputs for tasks 0,...t-1 Args: x: a [batch_size, ...] batch from X def forward(self, x): """Returns a list of outputs for tasks 0,...t-1 Args: x: a [batch_size, ...] batch from X """ head_outputs = [None] * self.t # Execute ...
Convert Y to t-length list of probabilistic labels if necessary def _preprocess_Y(self, Y, k=None): """Convert Y to t-length list of probabilistic labels if necessary""" # If not a list, convert to a singleton list if not isinstance(Y, list): if self.t != 1: msg = "F...
Returns the loss function to use in the train_model routine def _get_loss_fn(self): """Returns the loss function to use in the train_model routine""" criteria = self.criteria.to(self.config["device"]) loss_fn = lambda X, Y: sum( criteria(Y_tp, Y_t) for Y_tp, Y_t in zip(self.forward(...
Convert T label matrices with labels in 0...K_t to a one-hot format Here we can view e.g. the $(i,j)$ entries of the $T$ label matrices as a _label vector_ emitted by LF j for data point i. Args: L: a T-length list of [n,m] scipy.sparse label matrices with values in...
Returns the task marginals estimated by the model: a t-length list of [n,k_t] matrices where the (i,j) entry of the sth matrix represents the estimated P((Y_i)_s | \lambda_j(x_i)) Args: L: A t-length list of [n,m] scipy.sparse label matrices with values in {0,1,...,k...
Predicts (int) labels for an input X on all tasks Args: X: The input for the predict_proba method break_ties: A tie-breaking policy (see Classifier._break_ties()) return_probs: Return the predicted probabilities as well Returns: Y_p: An n-dim np.ndarray ...
Scores the predictive performance of the Classifier on all tasks Args: data: a Pytorch DataLoader, Dataset, or tuple with Tensors (X,Y): X: The input for the predict method Y: An [n] or [n, 1] torch.Tensor or np.ndarray of target labels in {1,...,...
The internal training routine called by train_model() after setup Args: train_data: a tuple of Tensors (X,Y), a Dataset, or a DataLoader of X (data) and Y (labels) for the train split loss_fn: the loss function to minimize (maps *data -> loss) valid_data: a t...
Serialize and save a model. Example: end_model = EndModel(...) end_model.train_model(...) end_model.save("my_end_model.pkl") def save(self, destination, **kwargs): """Serialize and save a model. Example: end_model = EndModel(...) end...
Deserialize and load a model. Example: end_model = EndModel.load("my_end_model.pkl") end_model.score(...) def load(source, **kwargs): """Deserialize and load a model. Example: end_model = EndModel.load("my_end_model.pkl") end_model.score(...) ...
Resumes training of a classifier by reloading the appropriate state_dicts for each model Args: train_data: a tuple of Tensors (X,Y), a Dataset, or a DataLoader of X (data) and Y (labels) for the train split model_path: the path to the saved checkpoint for resuming...
Restores the model and optimizer states This helper function restores the model's state to a given iteration so that a user can resume training at any epoch. Args: restore_state: a state_dict dictionary def _restore_training_state(self, restore_state): """Restores the mode...
Converts input data to the appropriate Dataset def _create_dataset(self, *data): """Converts input data to the appropriate Dataset""" # Make sure data is a tuple of dense tensors data = [self._to_torch(x, dtype=torch.FloatTensor) for x in data] return TensorDataset(*data)
Converts input data into a DataLoader def _create_data_loader(self, data, **kwargs): """Converts input data into a DataLoader""" if data is None: return None # Set DataLoader config # NOTE: Not applicable if data is already a DataLoader config = { **self...
Computes predictions in batch, given a labeled dataset Args: data: a Pytorch DataLoader, Dataset, or tuple with Tensors (X,Y): X: The input for the predict method Y: An [n] or [n, 1] torch.Tensor or np.ndarray of target labels in {1,...,k} ...
Break ties in each row of a tensor according to the specified policy Args: Y_s: An [n, k] np.ndarray of probabilities break_ties: A tie-breaking policy: "abstain": return an abstain vote (0) "random": randomly choose among the tied options ...
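A self-contained sketch of the two tie-breaking policies named in the docstring ('abstain' returns 0, 'random' samples among tied maxima; labels are 1-indexed as elsewhere in this codebase):

```python
import numpy as np

def break_ties(Y_s, break_ties="random"):
    """Return [n] int predictions from an [n, k] probability matrix,
    resolving row-wise ties per the given policy."""
    n, k = Y_s.shape
    Y_p = np.zeros(n, dtype=np.int64)
    max_vals = Y_s.max(axis=1, keepdims=True)
    ties = np.isclose(Y_s, max_vals)
    for i in range(n):
        tied = np.where(ties[i])[0]
        if len(tied) == 1:
            Y_p[i] = tied[0] + 1  # unique max: labels are 1-indexed
        elif break_ties == "random":
            Y_p[i] = np.random.choice(tied) + 1
        elif break_ties == "abstain":
            Y_p[i] = 0
        else:
            raise ValueError(f"Unknown tie-breaking policy: {break_ties}")
    return Y_p
```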
Converts a None, list, np.ndarray, or torch.Tensor to np.ndarray; also handles converting sparse input to dense. def _to_numpy(Z): """Converts a None, list, np.ndarray, or torch.Tensor to np.ndarray; also handles converting sparse input to dense.""" if Z is None: return Z ...
Converts a None, list, np.ndarray, or torch.Tensor to torch.Tensor; also handles converting sparse input to dense. def _to_torch(Z, dtype=None): """Converts a None, list, np.ndarray, or torch.Tensor to torch.Tensor; also handles converting sparse input to dense.""" if Z is None: ...
Prints a warning statement just once Args: msg: The warning message msg_name: [optional] The name of the warning. If None, the msg_name will be the msg itself. def warn_once(self, msg, msg_name=None): """Prints a warning statement just once Args: ...
Stack a list of np.ndarrays along the first axis, returning an np.ndarray; note this is mainly for smooth handling of the multi-task setting. def _stack_batches(X): """Stack a list of np.ndarrays along the first axis, returning an np.ndarray; note this is mainly for smooth handling of t...
Builds the network from the input module, middle modules, and head module def _build(self, input_module, middle_modules, head_module): """Builds the network from the input module, middle modules, and head module""" input_layer = self._build_input_layer(input_module) middle_layers = self._build_middle_layers(middle_modules) # Construct list of layers layers = [input_layer] if middle_layers ...
Convert Y to prob labels if necessary def _preprocess_Y(self, Y, k): """Convert Y to prob labels if necessary""" Y = Y.clone() # If preds, convert to probs if Y.dim() == 1 or Y.shape[1] == 1: Y = pred_to_prob(Y.long(), k=k) return Y
Returns a [n, k] tensor of probs (probabilistic labels). def predict_proba(self, X): """Returns a [n, k] tensor of probs (probabilistic labels).""" return F.softmax(self.forward(X), dim=1).data.cpu().numpy()
Execute sparse linear layer Args: X: an [n, h] torch.LongTensor containing up to h indices of features whose weights should be looked up and used in a sparse linear multiplication. def forward(self, X): """Execute sparse linear layer Args: ...
Create train/dev/test splits (mapped to split numbers) :param conn_str: :param candidate_def: :param word_dict: :param train: :param dev: :param test: :param use_lfs: :param pretrained_word_dict: :param max_seq_len: :return: def splits( ...
Convert Snorkel candidates to marked-up sequences :param c: a Snorkel candidate :param markers: marker tokens to insert around each entity span :return: the marked-up token sequence def _mark_entities(self, c, markers): """ Convert Snorkel candidates to marked-up sequences :param c: a Snorkel candidate :param markers: marker tokens to insert around each entity span :return: the marked-up token sequence """ sent = c.get_parent(...
Include terms available via pretrained embeddings :param pretrained_word_dict: :param candidates: :return: def _include_pretrained_vocab(self, pretrained_word_dict, candidates): """ Include terms available via pretrained embeddings :param pretrained_word_dict: ...
Initialize symbol table dictionary :param sentences: :param markers: :return: def _build_vocab(self, sentences, markers=[]): """ Initialize symbol table dictionary :param sentences: :param markers: :return: """ from snorkel.learning.pytor...
Run some basic checks on L. def _check_L(self, L): """Run some basic checks on L.""" # TODO: Take this out? if issparse(L): L = L.todense() # Check for correct values, e.g. warning if in {-1,0,1} if np.any(L < 0): raise ValueError("L must have values in ...
Convert a label matrix with labels in 0...k to a one-hot format Args: L: An [n,m] scipy.sparse label matrix with values in {0,1,...,k} Returns: L_ind: An [n,m*k] dense np.ndarray with values in {0,1} Note that no column is required for 0 (abstain) labels. def _create_...
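The indicator expansion described above can be sketched with dense NumPy (the library accepts a scipy.sparse input; the column layout assumed here puts the k indicator columns for source j at positions j*k ... j*k+k-1, so no column is allocated for abstains):

```python
import numpy as np

def create_L_ind(L, k):
    """Expand an [n, m] label matrix with values in {0,...,k} into an
    [n, m*k] binary indicator matrix; abstains (0) get no column."""
    n, m = L.shape
    L_ind = np.zeros((n, m * k))
    for y in range(1, k + 1):
        # Strided slice hits column j*k + (y-1) for every source j
        L_ind[:, (y - 1)::k] = (L == y)
    return L_ind
```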
Returns an augmented version of L where each column is an indicator for whether a certain source or clique of sources voted in a certain pattern. Args: L: An [n,m] scipy.sparse label matrix with values in {0,1,...,k} def _get_augmented_label_matrix(self, L, higher_order=False): ...
Build mask applied to O^{-1}, O for the matrix approx constraint def _build_mask(self): """Build mask applied to O^{-1}, O for the matrix approx constraint""" self.mask = torch.ones(self.d, self.d).byte() for ci in self.c_data.values(): si, ei = ci["start_index"], ci["end_index"] ...
Form the overlaps matrix, which is just all the different observed combinations of values of pairs of sources Note that we only include the k non-abstain values of each source; otherwise the model is not minimal, which leads to a singular matrix def _generate_O(self, L): """Form the overlaps m...
Form the *inverse* overlaps matrix def _generate_O_inv(self, L): """Form the *inverse* overlaps matrix""" self._generate_O(L) self.O_inv = torch.from_numpy(np.linalg.inv(self.O.numpy())).float()
Initialize the learned params - \mu is the primary learned parameter, where each row corresponds to the probability of a clique C emitting a specific combination of labels, conditioned on different values of Y (for each column); that is: self.mu[i*self.k + j, y] = P(\lambda_i = j |...
Returns the full conditional probabilities table as a numpy array, where row i*(k+1) + ly gives the conditional probabilities of source i emitting label ly (including abstains 0), conditioned on different values of Y, i.e.: c_probs[i*(k+1) + ly, y] = P(\lambda_i = ly | Y = y) ...
Returns the [n,k] matrix of label probabilities P(Y | \lambda) Args: L: An [n,m] scipy.sparse label matrix with values in {0,1,...,k} def predict_proba(self, L): """Returns the [n,k] matrix of label probabilities P(Y | \lambda) Args: L: An [n,m] scipy.sparse label matr...
Get the model's estimate of Q = \mu P \mu^T We can then separately extract \mu subject to additional constraints, e.g. \mu P 1 = diag(O). def get_Q(self): """Get the model's estimate of Q = \mu P \mu^T We can then separately extract \mu subject to additional constraints, e.g. ...
L2 loss centered around mu_init, scaled optionally per-source. In other words, diagonal Tikhonov regularization, ||D(\mu-\mu_{init})||_2^2 where D is diagonal. Args: - l2: A float or np.array representing the per-source regularization strengths to use d...
Set a prior for the class balance In order of preference: 1) Use user-provided class_balance 2) Estimate balance from Y_dev 3) Assume uniform class distribution def _set_class_balance(self, class_balance, Y_dev): """Set a prior for the class balance In order of prefere...
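The three-way preference order above can be sketched as a small standalone function (name and return convention are illustrative; labels in Y_dev are assumed to be in {1,...,k} as elsewhere in this codebase):

```python
import numpy as np

def set_class_balance(k, class_balance=None, Y_dev=None):
    """Choose a class-balance prior: user-provided, else estimated from
    Y_dev label counts, else uniform over the k classes."""
    if class_balance is not None:
        p = np.asarray(class_balance, dtype=float)
    elif Y_dev is not None:
        # Empirical frequencies of labels 1..k in the dev set
        counts = np.array([(np.asarray(Y_dev) == y).sum() for y in range(1, k + 1)])
        p = counts / counts.sum()
    else:
        p = np.full(k, 1 / k)
    return p
```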
Train the model (i.e. estimate mu) in one of two ways, depending on whether source dependencies are provided or not: Args: L_train: An [n,m] scipy.sparse matrix with values in {0,1,...,k} corresponding to labels from supervision sources on the training set ...
Transforms the input label matrix to a three-way overlaps tensor. Args: L: (np.array) An n x m array of LF output labels, in {0,...,k} if self.abstains, else in {1,...,k}, generated by m conditionally independent LFs on n data points Outputs: O: ...
Get the mask for the three-way overlaps matrix O, which is 0 when indices i,j,k are not unique def get_mask(self, m): """Get the mask for the three-way overlaps matrix O, which is 0 when indices i,j,k are not unique""" mask = torch.ones((m, m, m, self.k_lf, self.k_lf, self.k_lf)).byte()...
Store or calculate char_offsets, adding the offset of the doc end def _get_char_offsets(self, char_offsets): """Store or calculate char_offsets, adding the offset of the doc end""" if char_offsets: char_offsets = char_offsets char_offsets.append(len(self.text)) else: ...
Args: L: An [n, m] scipy.sparse matrix of labels Returns: output: A [n, k] np.ndarray of probabilistic labels def predict_proba(self, L): """ Args: L: An [n, m] scipy.sparse matrix of labels Returns: output: A [n, k] np.ndarray of probabil...
Args: balance: A 1d arraylike that sums to 1, corresponding to the (possibly estimated) class balance. def train_model(self, balance, *args, **kwargs): """ Args: balance: A 1d arraylike that sums to 1, corresponding to the (possibly estimated) cla...
Iterator over values in feasible set def feasible_set(self): """Iterator over values in feasible set""" for y in itertools.product(*[range(1, k + 1) for k in self.K]): yield np.array(y)
Calculate (micro) accuracy. Args: gold: A 1d array-like of gold labels pred: A 1d array-like of predicted labels (assuming abstain = 0) ignore_in_gold: A list of labels for which elements having that gold label will be ignored. ignore_in_pred: A list of labels for which e...
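A minimal sketch of micro accuracy with the two ignore lists described above (the filtering step mirrors the `_drop_ignored` helper that appears later in this codebase):

```python
import numpy as np

def accuracy_score(gold, pred, ignore_in_gold=(), ignore_in_pred=()):
    """Micro accuracy over items whose gold/pred labels are not ignored."""
    gold = np.asarray(gold)
    pred = np.asarray(pred)
    # Keep only items not flagged by either ignore list
    keep = ~np.isin(gold, ignore_in_gold) & ~np.isin(pred, ignore_in_pred)
    gold, pred = gold[keep], pred[keep]
    if len(gold) == 0:
        return 0.0
    return float((gold == pred).mean())
```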
Calculate (global) coverage. Args: gold: A 1d array-like of gold labels pred: A 1d array-like of predicted labels (assuming abstain = 0) ignore_in_gold: A list of labels for which elements having that gold label will be ignored. ignore_in_pred: A list of labels for which ...
Calculate precision for a single class. Args: gold: A 1d array-like of gold labels pred: A 1d array-like of predicted labels (assuming abstain = 0) ignore_in_gold: A list of labels for which elements having that gold label will be ignored. ignore_in_pred: A list of labels...
Calculate recall for a single class. Args: gold: A 1d array-like of gold labels pred: A 1d array-like of predicted labels (assuming abstain = 0) ignore_in_gold: A list of labels for which elements having that gold label will be ignored. ignore_in_pred: A list of labels fo...
Compute the ROC AUC score, given the gold labels and predicted probs. Args: gold: A 1d array-like of gold labels probs: A 2d array-like of predicted probabilities ignore_in_gold: A list of labels for which elements having that gold label will be ignored. Returns: ro...
Remove from gold and pred all items with labels designated to ignore. def _drop_ignored(gold, pred, ignore_in_gold, ignore_in_pred): """Remove from gold and pred all items with labels designated to ignore.""" keepers = np.ones_like(gold).astype(bool) for x in ignore_in_gold: keepers *= np.where(gol...
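The truncated `_drop_ignored` snippet can be completed as a short sketch of the same mask-accumulation idea (a boolean `&=` replaces the original's multiplicative update; behavior is identical):

```python
import numpy as np

def drop_ignored(gold, pred, ignore_in_gold=(), ignore_in_pred=()):
    """Remove from gold and pred all items whose gold or pred label
    appears in the respective ignore list."""
    gold = np.asarray(gold)
    pred = np.asarray(pred)
    keepers = np.ones_like(gold, dtype=bool)
    for x in ignore_in_gold:
        keepers &= gold != x
    for x in ignore_in_pred:
        keepers &= pred != x
    return gold[keepers], pred[keepers]
```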
Dump JSON to file def write(self): """Dump JSON to file""" with open(self.log_path, "w") as f: json.dump(self.log_dict, f, indent=1)
Predicts int labels for an input X on all tasks Args: X: The input for the predict_proba method break_ties: A tie-breaking policy return_probs: Return the predicted probabilities as well Returns: Y_p: A t-length list of n-dim np.ndarrays of predictions i...
Scores the predictive performance of the Classifier on all tasks Args: data: either a Pytorch Dataset, DataLoader or tuple supplying (X,Y): X: The input for the predict method Y: A t-length list of [n] or [n, 1] np.ndarrays or torch.Tensors of gold ...
Scores the predictive performance of the Classifier on task t Args: X: The input for the predict_task method Y: A [n] or [n, 1] np.ndarray or torch.Tensor of gold labels in {1,...,K_t} t: The task index to score metric: The metric with which to sc...
Predicts int labels for an input X on task t Args: X: The input for the predict_task_proba method t: The task index to predict Returns: An n-dim tensor of int predictions for the specified task def predict_task(self, X, t=0, break_ties="random", **kwargs): "...
Predicts probabilistic labels for an input X on task t Args: X: The input for the predict_proba method t: The task index for which to predict probabilities Returns: An [n, K_t] tensor of predictions for task t NOTE: By default, this method calls pr...
Converts a None, list, np.ndarray, or torch.Tensor to torch.Tensor def _to_torch(Z, dtype=None): """Converts a None, list, np.ndarray, or torch.Tensor to torch.Tensor""" if isinstance(Z, list): return [Classifier._to_torch(z, dtype=dtype) for z in Z] else: return Classif...
Converts a None, list, np.ndarray, or torch.Tensor to np.ndarray def _to_numpy(Z): """Converts a None, list, np.ndarray, or torch.Tensor to np.ndarray""" if isinstance(Z, list): return [Classifier._to_numpy(z) for z in Z] else: return Classifier._to_numpy(Z)
Prints the hyperband schedule for the user to read. def pretty_print_schedule(self, hyperband_schedule, describe_hyperband=True): """ Prints the hyperband schedule for the user to read. """ print("=========================================") print("| Hyperband Schedule |") print("======...
Gets the largest hyperband schedule within target_budget. This is required since the original hyperband algorithm uses R, the maximum number of resources per configuration. TODO(maxlam): Possibly binary search it if this becomes a bottleneck. Args: budget: total budget of th...
Generate hyperband schedule according to the paper. Args: R: maximum resources per config. eta: proportion of configurations to discard per iteration of successive halving. Returns: hyperband schedule, which is represented as a list of brackets, wher...
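A self-contained sketch of the schedule generation as defined in the Hyperband paper (each bracket s runs successive halving for s+1 rounds, keeping the top 1/eta of configs per round while multiplying per-config resources by eta):

```python
import math

def get_hyperband_schedule(R, eta=3):
    """Generate the Hyperband bracket schedule: a list of brackets, each a
    list of (n_configs, resources_per_config) successive-halving rounds."""
    # s_max = floor(log_eta(R)), via exact integer arithmetic
    s_max = 0
    while eta ** (s_max + 1) <= R:
        s_max += 1
    schedule = []
    for s in range(s_max, -1, -1):
        # Initial configs and per-config resources for this bracket
        n = math.ceil((s_max + 1) * eta ** s / (s + 1))
        r = R // eta ** s
        bracket = []
        for i in range(s + 1):
            # Keep top 1/eta of configs, give each eta x the resources
            bracket.append((n // eta ** i, r * eta ** i))
        schedule.append(bracket)
    return schedule
```

For R=27, eta=3 this yields four brackets, from the most exploratory (27 configs at 1 unit each) to a single round of 4 configs at the full 27 units.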
Performs hyperband search according to the generated schedule. At the beginning of each bracket, we generate a list of random configurations and perform successive halving on it; we repeat this process for the number of brackets in the schedule. Args: init_args: (li...
Adds special markers around tokens at specific positions (e.g., entities) Args: tokens: A list of tokens (the sentence) positions: 1) A list of inclusive ranges (tuples) corresponding to the token ranges of the entities in order. (Assumes each entity has only one...
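A sketch of the marker-insertion step (the marker format here, one (open, close) pair per entity, is an assumption; the library may use single marker tokens instead). Inserting from the rightmost span first keeps earlier token indices valid:

```python
def mark_entities(tokens, positions, markers):
    """Insert marker tokens around inclusive token ranges, one per entity.
    markers[i] = (open_token, close_token) for the ith entity (assumed)."""
    marked = list(tokens)
    # Process spans right-to-left so earlier indices are not shifted
    order = sorted(range(len(positions)), key=lambda i: positions[i][0], reverse=True)
    for i in order:
        start, end = positions[i]
        open_tok, close_tok = markers[i]
        marked.insert(end + 1, close_tok)
        marked.insert(start, open_tok)
    return marked
```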
Clears the state, starts clock def _clear_state(self, seed=None): """Clears the state, starts clock""" self.start_time = time() self.run_stats = [] self.best_index = -1 self.best_score = -1 self.best_config = None # Note: These must be set at the start of self.s...
Returns self.run_stats over search params as pandas dataframe. def run_stats_df(self): """Returns self.run_stats over search params as pandas dataframe.""" run_stats_df = [] for x in self.run_stats: search_results = {**x["search_params"]} search_results["score"] = x["sc...
Args: search_space: see config_generator() documentation valid_data: a tuple of Tensors (X,Y), a Dataset, or a DataLoader of X (data) and Y (labels) for the dev split init_args: (list) positional args for initializing the model train_args: (list) positiona...
Generates config dicts from the given search space Args: search_space: (dict) A dictionary of parameters to search over. See note below for more details. max_search: (int) The maximum number of configurations to search. If max_search is None, do a full gr...
Given label matrix L_aug and labels Y, compute the true mu params. Args: L: (np.array {0,1}) [n, d] The augmented (indicator) label matrix Y: (np.array int) [n] The true labels in {1,...,k} k: (int) Cardinality p: (np.array float) [k] The class balance def compute_mu(L_aug, Y, k, p...
Given label matrix L_aug and labels Y, compute the covariance. Args: L: (np.array {0,1}) [n, d] The augmented (indicator) label matrix Y: (np.array int) [n] The true labels in {1,...,k} k: (int) Cardinality p: (np.array float) [k] The class balance def compute_covariance(L_aug, Y, ...
Given label matrix L_aug and labels Y, compute the inverse covariance. Args: L_aug: (np.array) [n, d] The augmented (indicator) label matrix Y: (np.array int) [n] The true labels in {1,...,k} def compute_inv_covariance(L_aug, Y, k, p): """Given label matrix L_aug and labels Y, compute the inverse covariance. Args: ...
Pretty printing for numpy matrix X def print_matrix(X, decimals=1): """Pretty printing for numpy matrix X""" for row in np.round(X, decimals=decimals): print(row)
Returns an indicator vector where ith element = 1 if x_i is labeled by at least two LFs that give it disagreeing labels. def _conflicted_data_points(L): """Returns an indicator vector where ith element = 1 if x_i is labeled by at least two LFs that give it disagreeing labels.""" m = sparse.diags(np.rav...
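A dense NumPy sketch of the same indicator (the truncated original builds it with scipy.sparse diagonal tricks; this loop version states the definition directly):

```python
import numpy as np

def conflicted_data_points(L):
    """Indicator vector whose ith entry is 1 if row i of the [n, m]
    label matrix L (0 = abstain) contains two or more distinct
    non-abstain labels."""
    n = L.shape[0]
    conflicts = np.zeros(n, dtype=int)
    for i in range(n):
        votes = L[i][L[i] != 0]  # non-abstain votes on item i
        conflicts[i] = int(len(np.unique(votes)) > 1)
    return conflicts
```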
Return the polarities of each LF based on evidence in a label matrix. Args: L: an n x m scipy.sparse matrix where L_{i,j} is the label given by the jth LF to the ith candidate def lf_polarities(L): """Return the polarities of each LF based on evidence in a label matrix. Args: ...
Return the **fraction of items each LF labels that are also labeled by at least one other LF.** Note that the maximum possible overlap fraction for an LF is the LF's coverage, unless `normalize_by_coverage=True`, in which case it is 1. Args: L: an n x m scipy.sparse matrix where L_{i,j} is th...
Return the **fraction of items each LF labels that are also given a different (non-abstain) label by at least one other LF.** Note that the maximum possible conflict fraction for an LF is the LF's overlaps fraction, unless `normalize_by_overlaps=True`, in which case it is 1. Args: ...
Return the **empirical accuracy** against a set of labels Y (e.g. dev set) for each LF. Args: L: an n x m scipy.sparse matrix where L_{i,j} is the label given by the jth LF to the ith candidate Y: an [n] or [n, 1] np.ndarray of gold labels def lf_empirical_accuracies(L, Y): """R...
Returns a pandas DataFrame with the various per-LF statistics. Args: L: an n x m scipy.sparse matrix where L_{i,j} is the label given by the jth LF to the ith candidate Y: an [n] or [n, 1] np.ndarray of gold labels. If provided, the empirical accuracy for each LF will be cal...