Return a json-clean list. Will log an info message for failures. def clean_list(l0, clean_item_fn=None): """ Return a json-clean list. Will log an info message for failures. """ clean_item_fn = clean_item_fn if clean_item_fn else clean_item l = list() for index, item in enumerate(l0): cle...
Return a json-clean tuple. Will log an info message for failures. def clean_tuple(t0, clean_item_fn=None): """ Return a json-clean tuple. Will log an info message for failures. """ clean_item_fn = clean_item_fn if clean_item_fn else clean_item l = list() for index, item in enumerate(t0): ...
Return a json-clean item or None. Will log an info message for failure. def clean_item(i): """ Return a json-clean item or None. Will log an info message for failure. """ itype = type(i) if itype == dict: return clean_dict(i) elif itype == list: return clean_list(i) elif itype...
Return a json-clean item or None. Will log an info message for failure. def clean_item_no_list(i): """ Return a json-clean item or None. Will log an info message for failure. """ itype = type(i) if itype == dict: return clean_dict(i, clean_item_no_list) elif itype == list: return ...
Sample the stack in a thread and print it at regular intervals. def sample_stack_all(count=10, interval=0.1): """Sample the stack in a thread and print it at regular intervals.""" def print_stack_all(l, ll): l1 = list() l1.append("*** STACKTRACE - START ***") code = [] for thre...
Decision function, i.e., the raw data of the prediction def decision_function(self, X): "Decision function, i.e., the raw data of the prediction" self._X = Model.convert_features(X) self._eval() return self._ind[0].hy
Evaluates an individual using recursion and self._pos as pointer def _eval(self): "Evaluates an individual using recursion and self._pos as pointer" pos = self._pos self._pos += 1 node = self._ind[pos] if isinstance(node, Function): args = [self._eval() for x in range(...
Random individual using full method def create_random_ind_full(self, depth=0): "Random individual using full method" lst = [] self._create_random_ind_full(depth=depth, output=lst) return lst
Select either function or terminal in grow method def grow_use_function(self, depth=0): "Select either function or terminal in grow method" if depth == 0: return False if depth == self._depth: return True return np.random.random() < 0.5
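The depth rule in grow_use_function above can be sketched as a standalone helper. This is a minimal illustration with hypothetical names; the original uses np.random.random and instance state (self._depth) rather than an explicit max_depth argument:

```python
import random

def grow_use_function(depth, max_depth):
    """Decide whether to place a function node (True) or a terminal (False)
    at the current position in the grow method.

    depth: remaining depth budget; max_depth: the starting depth."""
    if depth == 0:          # no depth budget left: must pick a terminal
        return False
    if depth == max_depth:  # at the root: force a function node
        return True
    return random.random() < 0.5  # otherwise choose uniformly at random
```

At the two extremes the choice is deterministic, so only interior depths are stochastic: `grow_use_function(0, 4)` is always `False` and `grow_use_function(4, 4)` is always `True`.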
Random individual using grow method def create_random_ind_grow(self, depth=0): "Random individual using grow method" lst = [] self._depth = depth self._create_random_ind_grow(depth=depth, output=lst) return lst
Creates random population using ramped half-and-half method def create_population(self, popsize=1000, min_depth=2, max_depth=4, X=None): "Creates random population using ramped half-and-half method" import itertools args = [x for x in itertool...
Decision function, i.e., the raw data of the prediction def decision_function(self, X, **kwargs): "Decision function, i.e., the raw data of the prediction" if X is None: return self._hy_test X = self.convert_features(X) if len(X) < self.nvar: _ = 'Number of variables...
Median Fitness in the validation set def fitness_vs(self): "Median Fitness in the validation set" l = [x.fitness_vs for x in self.models] return np.median(l)
Directory to store the graphviz models def graphviz(self, directory, **kwargs): "Directory to store the graphviz models" import os if not os.path.isdir(directory): os.mkdir(directory) output = os.path.join(directory, 'evodag-%s') for k, m in enumerate(self.models): ...
Args: num_samples: Number of random samples at each grid point percentiles: Which percentiles to extract from the random samples Returns: def load_data(self, num_samples=1000, percentiles=None): """ Args: num_samples: Number of random samples at each grid po...
Calculate a probability based on the number of grid points in an area that exceed a threshold. Args: threshold: radius: Returns: def neighborhood_probability(self, threshold, radius): """ Calculate a probability based on the number of grid points in an area tha...
Encodes member percentile data to GRIB2 format. Returns: Series of GRIB2 messages def encode_grib2_percentile(self): """ Encodes member percentile data to GRIB2 format. Returns: Series of GRIB2 messages """ lscale = 1e6 grib_id_start = [...
Encodes member percentile data to GRIB2 format. Returns: Series of GRIB2 messages def encode_grib2_data(self): """ Encodes member percentile data to GRIB2 format. Returns: Series of GRIB2 messages """ lscale = 1e6 grib_id_start = [7, 0, ...
Loads data from each ensemble member. def load_data(self): """ Loads data from each ensemble member. """ for m, member in enumerate(self.members): mo = ModelOutput(self.ensemble_name, member, self.run_date, self.variable, self.start_date, self.en...
Calculate grid-point statistics across ensemble members. Args: consensus_type: mean, std, median, max, or percentile_nn Returns: EnsembleConsensus containing point statistic def point_consensus(self, consensus_type): """ Calculate grid-point statistics across e...
Determine the probability of exceeding a threshold at a grid point based on the ensemble forecasts at that point. Args: threshold: If >= threshold assigns a 1 to member, otherwise 0. Returns: EnsembleConsensus def point_probability(self, threshold): """ ...
Hourly probability of exceeding a threshold based on model values within a specified radius of a point. Args: threshold (float): probability of exceeding this threshold radius (int): distance from point in number of grid points to include in neighborhood calculation. sigmas ...
Calculates the neighborhood probability of exceeding a threshold at any time over the period loaded. Args: threshold (float): splitting threshold for probability calculations radius (int): distance from point in number of grid points to include in neighborhood calculation. ...
Reads the track forecasts and converts them to grid point values based on random sampling. Args: grid_method: "gamma" by default num_samples: Number of samples drawn from predicted pdf condition_threshold: Objects are not written to the grid if condition model probability is...
Writes data to grib2 file. Currently, grib codes are set by hand to hail. Args: path: Path to directory containing grib2 files. Returns: def write_grib2(self, path): """ Writes data to grib2 file. Currently, grib codes are set by hand to hail. Args: pa...
Initializes netCDF file for writing Args: filename: Name of the netCDF file time_units: Units for the time variable in format "<time> since <date string>" Returns: Dataset object def init_file(self, filename, time_units="seconds since 1970-01-01T00:00"): """...
Outputs data to a netCDF file. If the file does not exist, it will be created. Otherwise, additional variables are appended to the current file Args: out_data: Full-path and name of output netCDF file def write_to_file(self, out_data): """ Outputs data to a netCDF file. If ...
Restore the workspace to the given workspace_uuid. If workspace_uuid is None then create a new workspace and use it. def restore(self, workspace_uuid): """ Restore the workspace to the given workspace_uuid. If workspace_uuid is None then create a new workspace and use it. ...
Create a new workspace, insert into document_model, and return it. def new_workspace(self, name=None, layout=None, workspace_id=None, index=None) -> WorkspaceLayout.WorkspaceLayout: """ Create a new workspace, insert into document_model, and return it. """ workspace = WorkspaceLayout.WorkspaceLayout() ...
Looks for a workspace with workspace_id. If none is found, create a new one, add it, and change to it. def ensure_workspace(self, name, layout, workspace_id): """Looks for a workspace with workspace_id. If none is found, create a new one, add it, and change to it. """ workspac...
Pose a dialog to name and create a workspace. def create_workspace(self) -> None: """ Pose a dialog to name and create a workspace. """ def create_clicked(text): if text: command = Workspace.CreateWorkspaceCommand(self, text) command.perform() ...
Pose a dialog to rename the workspace. def rename_workspace(self) -> None: """ Pose a dialog to rename the workspace. """ def rename_clicked(text): if len(text) > 0: command = Workspace.RenameWorkspaceCommand(self, text) command.perform() sel...
Pose a dialog to confirm removal then remove workspace. def remove_workspace(self): """ Pose a dialog to confirm removal then remove workspace. """ def confirm_clicked(): if len(self.document_model.workspaces) > 1: command = Workspace.RemoveWorkspaceCommand(self) ...
Pose a dialog to name and clone a workspace. def clone_workspace(self) -> None: """ Pose a dialog to name and clone a workspace. """ def clone_clicked(text): if text: command = Workspace.CloneWorkspaceCommand(self, text) command.perform() sel...
Used in drag/drop support. def __replace_displayed_display_item(self, display_panel, display_item, d=None) -> Undo.UndoableCommand: """ Used in drag/drop support. """ self.document_controller.replaced_display_panel_content = display_panel.save_contents() command = DisplayPanel.ReplaceDisplayPan...
Given a set of DistributedROC or DistributedReliability objects, this function performs a bootstrap resampling of the objects and returns n_boot aggregations of them. Args: score_objs: A list of DistributedROC or DistributedReliability objects. Objects must have an __add__ method n_boot (int): ...
Update the ROC curve with a set of forecasts and observations Args: forecasts: 1D array of forecast values observations: 1D array of observation values. def update(self, forecasts, observations): """ Update the ROC curve with a set of forecasts and observations ...
Ingest the values of another DistributedROC object into this one and update the statistics inplace. Args: other_roc: another DistributedROC object. def merge(self, other_roc): """ Ingest the values of another DistributedROC object into this one and update the statistics inplace. ...
Generate a ROC curve from the contingency table by calculating the probability of detection (TP/(TP+FN)) and the probability of false detection (FP/(FP+TN)). Returns: A pandas.DataFrame containing the POD, POFD, and the corresponding probability thresholds. def roc_curve(self): """...
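The two rates defined above can be computed directly from the four contingency-table counts. This is a sketch with hypothetical names (the original derives them per probability threshold from its stored tables):

```python
def pod_pofd(tp, fn, fp, tn):
    """Probability of detection and probability of false detection,
    as defined above: POD = TP/(TP+FN), POFD = FP/(FP+TN)."""
    pod = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    pofd = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    return pod, pofd

# Example: 40 hits, 10 misses, 20 false alarms, 30 correct negatives
# pod_pofd(40, 10, 20, 30) -> (0.8, 0.4)
```

The zero-denominator guards are an assumption here; a threshold with no observed events would otherwise divide by zero.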
Calculate the Probability of Detection and False Alarm Ratio in order to output a performance diagram. Returns: pandas.DataFrame containing POD, FAR, and probability thresholds. def performance_curve(self): """ Calculate the Probability of Detection and False Alarm Ratio in order t...
Calculate the Area Under the ROC Curve (AUC). def auc(self): """ Calculate the Area Under the ROC Curve (AUC). """ roc_curve = self.roc_curve() return np.abs(np.trapz(roc_curve['POD'], x=roc_curve['POFD']))
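The trapezoidal integration inside auc() can be reproduced by hand on a small curve. The POD/POFD values below are made up for illustration; the trapezoid rule is spelled out explicitly so the sketch does not depend on np.trapz being available:

```python
import numpy as np

# Hypothetical POD/POFD pairs along a ROC curve (illustrative values only)
pofd = np.array([0.0, 0.1, 0.4, 1.0])
pod = np.array([0.0, 0.6, 0.9, 1.0])

# Trapezoidal rule, matching np.trapz(pod, x=pofd) in auc(); abs() guards
# against a reversed threshold ordering producing a negative area.
auc = float(np.abs(np.sum(np.diff(pofd) * (pod[:-1] + pod[1:]) / 2.0)))
# auc -> 0.825
```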
Calculate the maximum Critical Success Index across all probability thresholds Returns: The maximum CSI as a float def max_csi(self): """ Calculate the maximum Critical Success Index across all probability thresholds Returns: The maximum CSI as a float ...
Create an Array of ContingencyTable objects for each probability threshold. Returns: Array of ContingencyTable objects def get_contingency_tables(self): """ Create an Array of ContingencyTable objects for each probability threshold. Returns: Array of Contingenc...
Read the DistributedROC string and parse the contingency table values from it. Args: in_str (str): The string output from the __str__ method def from_str(self, in_str): """ Read the DistributedROC string and parse the contingency table values from it. Args: in_...
Update the statistics with a set of forecasts and observations. Args: forecasts (numpy.ndarray): Array of forecast probability values observations (numpy.ndarray): Array of observation values def update(self, forecasts, observations): """ Update the statistics with a se...
Ingest another DistributedReliability and add its contents to the current object. Args: other_rel: a DistributedReliability object. def merge(self, other_rel): """ Ingest another DistributedReliability and add its contents to the current object. Args: other_re...
Calculates the reliability diagram statistics. The key columns are Bin_Start and Positive_Relative_Freq Returns: pandas.DataFrame def reliability_curve(self): """ Calculates the reliability diagram statistics. The key columns are Bin_Start and Positive_Relative_Freq Return...
Calculate the components of the Brier score decomposition: reliability, resolution, and uncertainty. def brier_score_components(self): """ Calculate the components of the Brier score decomposition: reliability, resolution, and uncertainty. """ rel_curve = self.reliability_curve() ...
Calculate the Brier Score def brier_score(self): """ Calculate the Brier Score """ reliability, resolution, uncertainty = self.brier_score_components() return reliability - resolution + uncertainty
Calculate the Brier Skill Score def brier_skill_score(self): """ Calculate the Brier Skill Score """ reliability, resolution, uncertainty = self.brier_score_components() return (resolution - reliability) / uncertainty
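The two formulas in brier_score() and brier_skill_score() above fit together algebraically: since BS = REL - RES + UNC, the skill score (RES - REL)/UNC equals 1 - BS/UNC. A minimal sketch with made-up component values:

```python
def brier_score(reliability, resolution, uncertainty):
    # Murphy decomposition: BS = REL - RES + UNC
    return reliability - resolution + uncertainty

def brier_skill_score(reliability, resolution, uncertainty):
    # BSS = (RES - REL) / UNC, which is algebraically 1 - BS / UNC
    return (resolution - reliability) / uncertainty

# Illustrative component values (not from the source data)
rel, res, unc = 0.01, 0.05, 0.25
bs = brier_score(rel, res, unc)         # 0.21
bss = brier_skill_score(rel, res, unc)  # 0.16
assert abs(bss - (1.0 - bs / unc)) < 1e-12
```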
Update the statistics with forecasts and observations. Args: forecasts: The discrete Cumulative Distribution Functions of observations: def update(self, forecasts, observations): """ Update the statistics with forecasts and observations. Args: forec...
Calculates the continuous ranked probability score. def crps(self): """ Calculates the continuous ranked probability score. """ return np.sum(self.errors["F_2"].values - self.errors["F_O"].values * 2.0 + self.errors["O_2"].values) / \ (self.thresholds.size * self.num_forecas...
Calculate the climatological CRPS. def crps_climo(self): """ Calculate the climatological CRPS. """ o_bar = self.errors["O"].values / float(self.num_forecasts) crps_c = np.sum(self.num_forecasts * (o_bar ** 2) - o_bar * self.errors["O"].values * 2.0 + sel...
Calculate the continuous ranked probability skill score from existing data. def crpss(self): """ Calculate the continuous ranked probability skill score from existing data. """ crps_f = self.crps() crps_c = self.crps_climo() return 1.0 - float(crps_f) / float(crps_c)
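The skill-score convention used by crpss() follows the usual 1 - forecast/reference form: 1 is a perfect forecast, 0 matches climatology, and negative values are worse than climatology. A standalone sketch with hypothetical inputs:

```python
def crpss(crps_forecast, crps_climo):
    """Continuous ranked probability skill score relative to climatology."""
    return 1.0 - float(crps_forecast) / float(crps_climo)

# A forecast CRPS of 0.08 against a climatological CRPS of 0.20
# gives a skill score of 0.6 (values made up for illustration).
# crpss(0.08, 0.20) -> 0.6
```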
Sequentially checks the registered alerts (see :func:`registrar`) against the data from the SAT equipment operational status query. This method then yields a list of the active alerts. :param cliente_sat: An instance of :class:`satcfe.clientelocal.ClienteSATLocal` or :clas...
Return whether the metadata value for the given key exists. There are a set of predefined keys that, when used, will be type checked and be interoperable with other applications. Please consult reference documentation for valid keys. If using a custom key, we recommend structuring your keys in the '<group...
Get the metadata value for the given key. There are a set of predefined keys that, when used, will be type checked and be interoperable with other applications. Please consult reference documentation for valid keys. If using a custom key, we recommend structuring your keys in the '<group>.<attribute>' for...
Set the metadata value for the given key. There are a set of predefined keys that, when used, will be type checked and be interoperable with other applications. Please consult reference documentation for valid keys. If using a custom key, we recommend structuring your keys in the '<group>.<attribute>' for...
Delete the metadata value for the given key. There are a set of predefined keys that, when used, will be type checked and be interoperable with other applications. Please consult reference documentation for valid keys. If using a custom key, we recommend structuring your keys in the '<dotted>.<group>.<att...
Calculate the y-axis items dependent on the plot height. def calculate_y_ticks(self, plot_height): """Calculate the y-axis items dependent on the plot height.""" calibrated_data_min = self.calibrated_data_min calibrated_data_max = self.calibrated_data_max calibrated_data_range = calibr...
Calculate the x-axis items dependent on the plot width. def calculate_x_ticks(self, plot_width): """Calculate the x-axis items dependent on the plot width.""" x_calibration = self.x_calibration uncalibrated_data_left = self.__uncalibrated_left_channel uncalibrated_data_right = self.__...
Size the canvas item to the proper height. def size_to_content(self): """ Size the canvas item to the proper height. """ new_sizing = self.copy_sizing() new_sizing.minimum_height = 0 new_sizing.maximum_height = 0 axes = self.__axes if axes and axes.is_valid: ...
Size the canvas item to the proper width, the maximum of any label. def size_to_content(self, get_font_metrics_fn): """ Size the canvas item to the proper width, the maximum of any label. """ new_sizing = self.copy_sizing() new_sizing.minimum_width = 0 new_sizing.maximum_width = 0 ...
Size the canvas item to the proper width. def size_to_content(self): """ Size the canvas item to the proper width. """ new_sizing = self.copy_sizing() new_sizing.minimum_width = 0 new_sizing.maximum_width = 0 axes = self.__axes if axes and axes.is_valid: if a...
Load the content from a snippet file which exists in SNIPPETS_ROOT def get_snippet_content(snippet_name, **format_kwargs): """ Load the content from a snippet file which exists in SNIPPETS_ROOT """ filename = snippet_name + '.snippet' snippet_file = os.path.join(SNIPPETS_ROOT, filename) if not os.path....
Update the display values. Called from display panel. This method saves the display values and data and triggers an update. It should be as fast as possible. As a layer, this canvas item will respond to the update by calling prepare_render on the layer's rendering thread. Prepare render will c...
Change the view to encompass the channels and data represented by the given intervals. def __view_to_intervals(self, data_and_metadata: DataAndMetadata.DataAndMetadata, intervals: typing.List[typing.Tuple[float, float]]) -> None: """Change the view to encompass the channels and data represented by the given in...
Change the view to encompass the selected graphic intervals. def __view_to_selected_graphics(self, data_and_metadata: DataAndMetadata.DataAndMetadata) -> None: """Change the view to encompass the selected graphic intervals.""" all_graphics = self.__graphics graphics = [graphic for graphic_index...
Prepare the display. This method gets called by the canvas layout/draw engine after being triggered by a call to `update`. When data or display parameters change, the internal state of the line plot gets updated. This method takes that internal state and updates the child canvas items. ...
Map the mouse to the 1-d position within the line graph. def __update_cursor_info(self): """ Map the mouse to the 1-d position within the line graph. """ if not self.delegate: # allow display to work without delegate return if self.__mouse_in and self.__last_mouse: po...
Identify storms in gridded model output and extract uniform-sized patches around the storm centers of mass. Returns: def find_model_patch_tracks(self): """ Identify storms in gridded model output and extract uniform-sized patches around the storm centers of mass. Returns: """...
Identify storms at each model time step and link them together with object matching. Returns: List of STObjects containing model track information. def find_model_tracks(self): """ Identify storms at each model time step and link them together with object matching. Returns...
Identify objects from MRMS timesteps and link them together with object matching. Returns: List of STObjects containing MESH track information. def find_mrms_tracks(self): """ Identify objects from MRMS timesteps and link them together with object matching. Returns: ...
Match forecast and observed tracks. Args: model_tracks: obs_tracks: unique_matches: closest_matches: Returns: def match_tracks(self, model_tracks, obs_tracks, unique_matches=True, closest_matches=False): """ Match forecast and observed t...
Extract model attribute data for each model track. Storm variables are those that describe the model storm directly, such as radar reflectivity or updraft helicity. Potential variables describe the surrounding environmental conditions of the storm, and should be extracted from the timestep before the st...
Given forecast and observed track pairings, maximum hail sizes are associated with each paired forecast storm track timestep. If the duration of the forecast and observed tracks differ, then interpolation is used for the intermediate timesteps. Args: model_tracks: List of model trac...
Given a matching set of observed tracks for each model track, Args: model_tracks: obs_tracks: track_pairings: Returns: def match_hail_size_step_distributions(self, model_tracks, obs_tracks, track_pairings): """ Given a matching set of ob...
Calculates spatial and temporal translation errors between matched forecast and observed tracks. Args: model_tracks: List of model track STObjects obs_tracks: List of observed track STObjects track_pairings: List of tuples pairing forecast and observed tracks. ...
Override from PersistentObject. def persistent_object_context_changed(self): """ Override from PersistentObject. """ super().persistent_object_context_changed() def change_registration(registered_object, unregistered_object): if registered_object and registered_object.uuid == self....
Override from PersistentObject. def persistent_object_context_changed(self): """ Override from PersistentObject. """ super().persistent_object_context_changed() def register(): if self.__source is not None and self.__target is not None: assert not self.__binding ...
Override from PersistentObject. def persistent_object_context_changed(self): """ Override from PersistentObject. """ super().persistent_object_context_changed() def detach(): for listener in self.__interval_mutated_listeners: listener.close() self.__inte...
Return the text display for the given tree node. Based on the number of keys associated with the tree node. def __display_for_tree_node(self, tree_node): """ Return the text display for the given tree node. Based on the number of keys associated with the tree node. """ keys = tree_node.keys if len(keys) == 1: ...
Called from the root tree node when a new node is inserted into tree. This method creates properties to represent the node for display and inserts it into the item model controller. def __insert_child(self, parent_tree_node, index, tree_node): """ Called from the root tree node when a n...
Called from the root tree node when a node is removed from the tree. This method removes it from the item model controller. def __remove_child(self, parent_tree_node, index): """ Called from the root tree node when a node is removed from the tree. This method removes it from the ...
Update all tree item displays if needed. Usually for count updates. def update_all_nodes(self): """ Update all tree item displays if needed. Usually for count updates. """ item_model_controller = self.item_model_controller if item_model_controller: if self.__node_counts_dirty: ...
Called to handle selection changes in the tree widget. This method should be connected to the on_selection_changed event. This method builds a list of keys represented by all selected items. It then provides date_filter to filter data items based on the list of keys. It then sets th...
Called to handle changes to the text filter. :param text: The text for the filter. def text_filter_changed(self, text): """ Called to handle changes to the text filter. :param text: The text for the filter. """ text = text.strip() if text else None ...
Create a combined filter. Set the resulting filter into the document controller. def __update_filter(self): """ Create a combined filter. Set the resulting filter into the document controller. """ filters = list() if self.__date_filter: filters.append(self.__date...
Return the keys associated with this node by adding its key and then adding parent keys recursively. def __get_keys(self): """ Return the keys associated with this node by adding its key and then adding parent keys recursively. """ keys = list() tree_node = self while tree_node is not N...
Insert a value (data item) into this tree node and then its children. This will be called in response to a new data item being inserted into the document. Also updates the tree node's cumulative child count. def insert_value(self, keys, value): """ Insert a value...
Remove a value (data item) from this tree node and its children. Also updates the tree node's cumulative child count. def remove_value(self, keys, value): """ Remove a value (data item) from this tree node and its children. Also updates the tree node's cumulative child count...
From a 2D grid or time series of 2D grids, this method labels storm objects with either the Enhanced Watershed or Hysteresis methods. Args: data: the gridded data to be labeled. Should be a 2D numpy array in (y, x) coordinate order or a 3D numpy array in (time, y, x) coordinate order ...
After storms are labeled, this method extracts the storm objects from the grid and places them into STObjects. The STObjects contain intensity, location, and shape information about each storm at each timestep. Args: label_grid: 2D or 3D array output by label_storm_objects. data: 2D or 3D array...
After storms are labeled, this method extracts boxes of equal size centered on each storm from the grid and places them into STObjects. The STObjects contain intensity, location, and shape information about each storm at each timestep. Args: label_grid: 2D or 3D array output by label_storm_objects....
Given the output of extract_storm_objects, this method tracks storms through time and merges individual STObjects into a set of tracks. Args: storm_objects: list of list of STObjects that have not been tracked. times: List of times associated with each set of STObjects distance_componen...
Multiclass Peirce Skill Score (also Hanssen and Kuipers score, True Skill Score) def peirce_skill_score(self): """ Multiclass Peirce Skill Score (also Hanssen and Kuipers score, True Skill Score) """ n = float(self.table.sum()) nf = self.table.sum(axis=1) no = self.table...
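The multiclass Peirce skill score described above can be sketched from the standard formula: (observed accuracy minus chance agreement) divided by (1 minus the sum of squared observed frequencies). The axis convention (forecasts on axis 1 in the original, judging by `nf = self.table.sum(axis=1)`) is only partially visible, so treat the layout below as an assumption:

```python
import numpy as np

def peirce_skill_score(table):
    """Peirce skill score for a square multiclass contingency table.

    Assumes one axis holds forecast categories and the other observed
    categories, with correct classifications on the diagonal."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    nf = table.sum(axis=1)   # forecast marginals
    no = table.sum(axis=0)   # observed marginals
    correct = np.trace(table)
    chance = (nf * no).sum() / n ** 2
    return (correct / n - chance) / (1.0 - (no ** 2).sum() / n ** 2)

# A perfectly diagonal table scores 1; a table where forecasts are
# independent of observations scores 0.
```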
Gerrity Score, which weights each cell in the contingency table by its observed relative frequency. :return: def gerrity_score(self): """ Gerrity Score, which weights each cell in the contingency table by its observed relative frequency. :return: """ k = self.table.shape...
Euclidean distance between the centroids of item_a and item_b. Args: item_a: STObject from the first set in ObjectMatcher time_a: Time integer being evaluated item_b: STObject from the second set in ObjectMatcher time_b: Time integer being evaluated max_value: Maximum distan...
Centroid distance with motion corrections. Args: item_a: STObject from the first set in ObjectMatcher time_a: Time integer being evaluated item_b: STObject from the second set in ObjectMatcher time_b: Time integer being evaluated max_value: Maximum distance value used as sca...
Euclidean distance between the pixels in item_a and item_b closest to each other. Args: item_a: STObject from the first set in ObjectMatcher time_a: Time integer being evaluated item_b: STObject from the second set in ObjectMatcher time_b: Time integer being evaluated max_va...