Plots a set of receiver/relative operating characteristic (ROC) curves from DistributedROC objects.
The ROC curve shows how well a forecast discriminates between two outcomes over a series of thresholds. It
features Probability of Detection (True Positive Rate) on the y-axis and Probability of False Detection
... |
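The quantities plotted on a ROC curve can be sketched from a 2x2 contingency table. The helper below is hypothetical (it is not part of DistributedROC) and assumes binary observations and probabilistic forecasts:

```python
import numpy as np

def roc_points(forecast_probs, observed, thresholds):
    """Compute (POFD, POD) pairs for a set of probability thresholds.

    Hypothetical helper illustrating the quantities a ROC curve plots;
    not the DistributedROC implementation.
    """
    observed = np.asarray(observed, dtype=bool)
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    points = []
    for t in thresholds:
        predicted = forecast_probs >= t
        hits = np.sum(predicted & observed)
        misses = np.sum(~predicted & observed)
        false_alarms = np.sum(predicted & ~observed)
        correct_negatives = np.sum(~predicted & ~observed)
        # POD = hit rate; POFD = false positive rate
        pod = hits / (hits + misses) if hits + misses else 0.0
        pofd = (false_alarms / (false_alarms + correct_negatives)
                if false_alarms + correct_negatives else 0.0)
        points.append((pofd, pod))
    return points
```

Sweeping `thresholds` from 0 to 1 traces the curve from the upper-right to the lower-left corner.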
Draws a performance diagram from a set of DistributedROC objects.
A performance diagram is a variation on the ROC curve in which the Probability of False Detection on the
x-axis has been replaced with the Success Ratio (1-False Alarm Ratio or Precision). The diagram also shows
the Critical Success Index (C... |
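The axes of a performance diagram derive from the same contingency counts; a minimal sketch (illustrative names, nonzero denominators assumed):

```python
def performance_point(hits, misses, false_alarms):
    """Success ratio, POD, CSI, and frequency bias from contingency counts.

    Illustrative sketch of the quantities a performance diagram shows;
    assumes nonzero denominators.
    """
    pod = hits / (hits + misses)
    success_ratio = hits / (hits + false_alarms)  # 1 - false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    bias = (hits + false_alarms) / (hits + misses)
    return success_ratio, pod, csi, bias
```

A perfect forecast sits at the top-right corner (success ratio = POD = CSI = bias = 1).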
Plot reliability curves against a 1:1 diagonal to determine if probability forecasts are consistent with their
observed relative frequency.
Args:
rel_objs (list): List of DistributedReliability objects.
obj_labels (list): List of labels describing the forecast model associated with each curve.
... |
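A reliability curve compares the mean forecast probability in each bin against the observed relative frequency; points on the 1:1 diagonal indicate calibrated forecasts. A minimal NumPy sketch (not the DistributedReliability implementation):

```python
import numpy as np

def reliability_curve(probs, observed, bins):
    """Mean forecast probability and observed relative frequency per bin.

    Sketch of what a reliability diagram plots, assuming binary
    observations; empty bins are skipped.
    """
    probs = np.asarray(probs, dtype=float)
    observed = np.asarray(observed, dtype=float)
    idx = np.digitize(probs, bins) - 1
    mean_prob, obs_freq = [], []
    for b in range(len(bins) - 1):
        in_bin = idx == b
        if in_bin.any():
            mean_prob.append(probs[in_bin].mean())
            obs_freq.append(observed[in_bin].mean())
    return mean_prob, obs_freq
```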
Plot reliability curves against a 1:1 diagonal to determine if probability forecasts are consistent with their
observed relative frequency. Also adds gray areas to show where the climatological probabilities lie and what
areas result in a positive Brier Skill Score.
Args:
rel_objs (list): List of D... |
Return a string value that the user enters. Raises exception for cancel.
def get_string(self, prompt, default_str=None) -> str:
"""Return a string value that the user enters. Raises exception for cancel."""
accept_event = threading.Event()
value_ref = [None]
def perform():
... |
Return a boolean value for accept/reject.
def __accept_reject(self, prompt, accepted_text, rejected_text, display_rejected):
"""Return a boolean value for accept/reject."""
accept_event = threading.Event()
result_ref = [False]
def perform():
def accepted():
... |
Returns the weight (w) using OLS of r * w = gp._ytr
def compute_weight(self, r, ytr=None, mask=None):
"""Returns the weight (w) using OLS of r * w = gp._ytr """
ytr = self._ytr if ytr is None else ytr
mask = self._mask if mask is None else mask
return compute_weight(r, ytr, mask) |
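The module-level `compute_weight` the method delegates to is not shown; a plausible sketch of an OLS solve of `r @ w = ytr` using `np.linalg.lstsq` (the real signature and masking semantics may differ):

```python
import numpy as np

def compute_weight(r, ytr, mask=None):
    """Solve r @ w = ytr for w by ordinary least squares.

    Hypothetical stand-in for the module-level helper; mask selects
    the rows used in the fit.
    """
    r = np.asarray(r, dtype=float)
    ytr = np.asarray(ytr, dtype=float)
    if mask is not None:
        r, ytr = r[mask], ytr[mask]
    w, *_ = np.linalg.lstsq(r, ytr, rcond=None)
    return w
```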
Test whether the predicted values are finite
def isfinite(self):
"Test whether the predicted values are finite"
if self._multiple_outputs:
if self.hy_test is not None:
r = [(hy.isfinite() and (hyt is None or hyt.isfinite()))
for hy, hyt in zip(self.hy, s... |
Parse an arbitrary block of python code to get the value of a named argument
from inside a function call
def value_of_named_argument_in_function(argument_name, function_name, search_str,
resolve_varname=False):
""" Parse an arbitrary block of python code to get the v... |
Search for a regex in a file
If return_match is True, return the found object instead of a boolean
def regex_in_file(regex, filepath, return_match=False):
""" Search for a regex in a file
If return_match is True, return the found object instead of a boolean
"""
file_content = get_file_content(fil... |
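The truncated body above reads the file content through a `get_file_content` helper; a self-contained sketch of the same behavior (assuming the whole file fits in memory):

```python
import re

def regex_in_file(regex, filepath, return_match=False):
    """Search a file for a regex.

    If return_match is True, return the match object (or None)
    instead of a boolean. Minimal sketch of the documented behavior.
    """
    with open(filepath, "r", encoding="utf-8") as fh:
        content = fh.read()
    match = re.search(regex, content)
    if return_match:
        return match
    return match is not None
```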
Search for a regex in a file contained within the package directory
If return_match is True, return the found object instead of a boolean
def regex_in_package_file(regex, filename, package_name, return_match=False):
""" Search for a regex in a file contained within the package directory
If return_match i... |
Test to see if a string is a URL or not, defined in this case as a string for which
urlparse returns a scheme component
>>> string_is_url('somestring')
False
>>> string_is_url('https://some.domain.org/path')
True
def string_is_url(test_str):
""" Test to see if a string is a URL or not, defined... |
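The definition given (a string for which `urlparse` returns a scheme component) can be implemented directly with the standard library, consistent with the doctests above:

```python
from urllib.parse import urlparse

def string_is_url(test_str):
    """True when urlparse finds a scheme component in the string.

    Sketch matching the documented definition; note that by this
    definition 'mailto:x@y' also counts as a URL.
    """
    return bool(urlparse(test_str).scheme)
```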
Begin transaction state for item.
A transaction state exists to prevent writing out to disk, mainly for performance reasons.
All changes to the object are delayed until the transaction state exits.
This method is thread safe.
def item_transaction(self, item) -> Transaction:
"""Begi... |
Insert a new data item into document model.
This method is NOT threadsafe.
def insert_data_item(self, before_index, data_item, auto_display: bool = True) -> None:
"""Insert a new data item into document model.
This method is NOT threadsafe.
"""
assert data_item is not None
... |
Remove data item from document model.
This method is NOT threadsafe.
def remove_data_item(self, data_item: DataItem.DataItem, *, safe: bool=False) -> typing.Optional[typing.Sequence]:
"""Remove data item from document model.
This method is NOT threadsafe.
"""
# remove data ite... |
Cascade delete an item.
Returns an undelete log that can be used to undo the cascade deletion.
Builds a cascade of items to be deleted and dependencies to be removed when the passed item is deleted. Then
removes computations that are no longer valid. Removing a computation may result in more d... |
Return the list of data items containing data that directly depends on data in this item.
def get_dependent_items(self, item) -> typing.List:
"""Return the list of data items containing data that directly depends on data in this item."""
with self.__dependency_tree_lock:
return copy.copy(se... |
Return the list of data items containing data that directly depends on data in this item.
def __get_deep_dependent_item_set(self, item, item_set) -> None:
"""Return the list of data items containing data that directly depends on data in this item."""
if not item in item_set:
item_set.add(it... |
Return the list of data items containing data that directly depends on data in this item.
def get_dependent_data_items(self, data_item: DataItem.DataItem) -> typing.List[DataItem.DataItem]:
"""Return the list of data items containing data that directly depends on data in this item."""
with self.__depen... |
Return a context object for a document-wide transaction.
def transaction_context(self):
"""Return a context object for a document-wide transaction."""
class DocumentModelTransaction:
def __init__(self, document_model):
self.__document_model = document_model
def ... |
Return a context manager to put the data item in a 'live state'.
def data_item_live(self, data_item):
""" Return a context manager to put the data item in a 'live state'. """
class LiveContextManager:
def __init__(self, manager, object):
self.__manager = manager
... |
Begins a live state for the data item.
The live state is propagated to dependent data items.
This method is thread safe. See slow_test_dependent_data_item_removed_while_live_data_item_becomes_unlive.
def begin_data_item_live(self, data_item):
"""Begins a live state for the data item.
... |
Ends a live state for the data item.
The live-ness property is propagated to dependent data items, similar to the transactions.
This method is thread safe.
def end_data_item_live(self, data_item):
"""Ends a live state for the data item.
The live-ness property is propagated to depende... |
Construct a data item reference.
Construct a data item reference and assign a data item to it. Update data item session id and session metadata.
Also connect the data channel processor.
This method is thread safe.
def __construct_data_item_reference(self, hardware_source: HardwareSource.Hardw... |
Create a new data item with computation specified by processing_id, inputs, and region_list_map.
The region_list_map associates a list of graphics corresponding to the required regions with a computation source (key).
def __make_computation(self, processing_id: str, inputs: typing.List[typing.Tuple[DisplayIte... |
Creates a color map for values in array
:param array: color map to interpolate
:param x: number of colors
:return: interpolated color map
def interpolate_colors(array: numpy.ndarray, x: int) -> numpy.ndarray:
"""
Creates a color map for values in array
:param array: color map to interpolate
... |
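The truncated body is not shown; a plausible sketch of linear color-map interpolation using per-channel `np.interp` (the package may interpolate differently, e.g. with splines):

```python
import numpy as np

def interpolate_colors(array, x):
    """Linearly interpolate an (n, channels) color array to x colors.

    Hypothetical sketch of the documented behavior.
    """
    array = np.asarray(array, dtype=float)
    old_positions = np.linspace(0.0, 1.0, array.shape[0])
    new_positions = np.linspace(0.0, 1.0, x)
    # Interpolate each color channel independently
    return np.stack(
        [np.interp(new_positions, old_positions, array[:, c])
         for c in range(array.shape[1])], axis=1)
```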
Star any gist by providing a gist ID or gist name (for the authenticated user).
def star(self, **args):
'''
Star any gist by providing a gist ID or gist name (for the authenticated user).
'''
if 'name' in args:
self.gist_name = args['name']
self.gist_id = self.getMyID(self.gist_name)
elif 'id' in args:
self.gist_id = a... |
Fork any gist by providing a gist ID or gist name (for the authenticated user).
def fork(self, **args):
'''
Fork any gist by providing a gist ID or gist name (for the authenticated user).
'''
if 'name' in args:
self.gist_name = args['name']
self.gist_id = self.getMyID(self.gist_name)
elif 'id' in args:
self.gist_id = a... |
Check whether a gist is starred by providing a gist ID or gist name (for the authenticated user).
def checkifstar(self, **args):
'''
Check whether a gist is starred by providing a gist ID or gist name (for the authenticated user).
'''
if 'name' in args:
self.gist_name = args['name']
self.gist_id = self.getMyID(self.gist_name)
elif 'id' ... |
Saves the decoded log file.
:param str destino: (Optional) Full path to the file where the log
    data should be saved. If not given, a temporary file is created
    via :func:`tempfile.mkstemp`.
:param str prefix: (Optional) Prefix for the name of the ... |
Builds a :class:`RespostaExtrairLogs` from the given return
value.
:param unicode retorno: Return value of the ``ExtrairLogs`` function.
def analisar(retorno):
    """Builds a :class:`RespostaExtrairLogs` from the given return
    value.
    :param unicode retorno: Return value of the ``Ex... |
Loads time series of 2D data grids from each opened file. The code
handles loading a full time series from one file or individual time steps
from multiple files. Missing files are supported.
def load_data_old(self):
"""
Loads time series of 2D data grids from each opened file. The code... |
Load data from netCDF file objects or list of netCDF file objects. Handles special variable name formats.
Returns:
Array of data loaded from files in (time, y, x) dimensions, Units
def load_data(self):
"""
Load data from netCDF file objects or list of netCDF file objects. Handles s... |
Searches var list for variable name, checks other variable name format options.
Args:
variable (str): Variable being loaded
var_list (list): List of variables in file.
Returns:
Name of variable in file containing relevant data, and index of variable z-level if multi... |
Load data from flat data files containing total track information and information about each timestep.
The two sets are combined using merge operations on the Track IDs. Additional member information is gathered
from the appropriate member file.
Args:
mode: "train" or "forecast"
... |
Calculate a copula multivariate normal distribution from the training data for each group of ensemble members.
Distributions are written to a pickle file for later use.
Args:
output_file: Pickle file
model_names: Names of the tracking models
label_columns: Names of th... |
Fit machine learning models to predict whether or not hail will occur.
Args:
model_names: List of strings with the names for the particular machine learning models
model_objs: scikit-learn style machine learning model objects.
input_columns: list of the names of the columns u... |
Fit models to predict hail/no-hail and use cross-validation to determine the probability threshold that
maximizes a skill score.
Args:
model_names: List of machine learning model names
model_objs: List of Scikit-learn ML models
input_columns: List of input variables i... |
Apply condition models to forecast data.
Args:
model_names: List of names associated with each condition model used for prediction
input_columns: List of columns in data used as input into the model
metadata_cols: Columns from input data that should be included in the data fra... |
Fits multitask machine learning models to predict the parameters of a size distribution
Args:
model_names: List of machine learning model names
model_objs: scikit-learn style machine learning model objects
input_columns: Training data columns used as input for ML model
... |
This calculates 2 principal components for the hail size distribution between the shape and scale parameters.
Separate machine learning models are fit to predict each component.
Args:
model_names: List of machine learning model names
model_objs: List of machine learning model ob... |
Make predictions using fitted size distribution models.
Args:
model_names: Name of the models for predictions
input_columns: Data columns used for input into ML models
metadata_cols: Columns from input data that should be included in the data frame with the predictions.
... |
Make predictions using fitted size distribution models.
Args:
model_names: Name of the models for predictions
input_columns: Data columns used for input into ML models
output_columns: Names of output columns
metadata_cols: Columns from input data that should be in... |
Fit size models to produce discrete pdfs of forecast hail sizes.
Args:
model_names: List of model names
model_objs: List of model objects
input_columns: List of input variables
output_column: Output variable name
output_start: Hail size bin start
... |
Apply size models to forecast data.
Args:
model_names:
input_columns:
metadata_cols:
data_mode:
def predict_size_models(self, model_names,
input_columns,
metadata_cols,
data_mode=... |
Fit machine learning models to predict track error offsets.
model_names:
model_objs:
input_columns:
output_columns:
output_ranges:
def fit_track_models(self,
model_names,
model_objs,
i... |
Save machine learning models to pickle files.
def save_models(self, model_path):
"""
Save machine learning models to pickle files.
"""
for group, condition_model_set in self.condition_models.items():
for model_name, model_obj in condition_model_set.items():
o... |
Load models from pickle files.
def load_models(self, model_path):
"""
Load models from pickle files.
"""
condition_model_files = sorted(glob(model_path + "*_condition.pkl"))
if len(condition_model_files) > 0:
for condition_model_file in condition_model_files:
... |
Output forecast values to geoJSON file format.
:param forecasts:
:param condition_model_names:
:param size_model_names:
:param track_model_names:
:param json_data_path:
:param out_path:
:return:
def output_forecasts_json(self, forecasts,
... |
Output hail forecast values to csv files by run date and ensemble member.
Args:
forecasts:
mode:
csv_path:
Returns:
def output_forecasts_csv(self, forecasts, mode, csv_path, run_date_format="%Y%m%d-%H%M"):
"""
Output hail forecast values to csv files... |
Loads (or reloads) the SAT library. If the calling convention has
not yet been defined, it is determined from the extension of the
library file.
:raises ValueError: If the calling convention cannot be determined
    or is not a valid value.
def _carregar(self):
    ... |
Function ``AtivarSAT`` as per ER SAT, item 6.1.1.
Activation of the SAT equipment. Depending on the certificate type, the
activation procedure is completed by sending the certificate
issued by ICP-Brasil (:meth:`comunicar_certificado_icpbrasil`).
:param int tipo_certificado: Must ... |
Function ``ComunicarCertificadoICPBRASIL`` as per ER SAT, item 6.1.2.
Sends the certificate created by ICP-Brasil.
:param str certificado: Content of the digital certificate created by the
    ICP-Brasil certificate authority.
:return: Returns the SAT function's response *verbatim*.
... |
Function ``EnviarDadosVenda`` as per ER SAT, item 6.1.3. Sends the
sale CF-e to the SAT equipment, which will send it to SEFAZ for
authorization.
:param dados_venda: An instance of :class:`~satcfe.entidades.CFeVenda`
    or a string containing the XML of the sale CF-e.
:return:... |
Function ``CancelarUltimaVenda`` as per ER SAT, item 6.1.4. Sends the
cancellation CF-e to the SAT equipment, which will send it to SEFAZ
for authorization and cancellation of the CF-e.
:param chave_cfe: String containing the key of the CF-e to be
    cancelled, prefixed with the literal ``CFe``.
... |
Function ``ConsultarNumeroSessao`` as per ER SAT, item 6.1.8.
Queries the SAT equipment for a specific session number.
:param int numero_sessao: Number of the session to query.
:return: Returns the SAT function's response *verbatim*.
:rtype: string
def consultar_numero_sess... |
Function ``ConfigurarInterfaceDeRede`` as per ER SAT, item 6.1.9.
Configures the communication interface of the SAT equipment.
:param configuracao: Instance of :class:`~satcfe.rede.ConfiguracaoRede`
    or a string containing the XML with the network settings.
:return: Returns *verbat... |
Function ``AssociarAssinatura`` as per ER SAT, item 6.1.10.
Associates the commercial application's signature.
:param sequencia_cnpj: 28-digit string composed of the CNPJ of the
    AC developer and the CNPJ of the taxpaying commercial
    establishment, as per ER SAT, item ... |
Function ``TrocarCodigoDeAtivacao`` as per ER SAT, item 6.1.15.
Changes the activation code of the SAT equipment.
:param str novo_codigo_ativacao: The new activation code chosen
    by the taxpayer.
:param int opcao: Indicates whether the activation code
... |
Loads the forecast files and gathers the forecast information into pandas DataFrames.
def load_forecasts(self):
"""
Loads the forecast files and gathers the forecast information into pandas DataFrames.
"""
forecast_path = self.forecast_json_path + "/{0}/{1}/".format(self.run_date.strfti... |
Loads the track total and step files and merges the information into a single data frame.
def load_obs(self):
"""
Loads the track total and step files and merges the information into a single data frame.
"""
track_total_file = self.track_data_csv_path + \
"track_total_{0}_{1... |
Match forecasts and observations.
def merge_obs(self):
"""
Match forecasts and observations.
"""
for model_type in self.model_types:
self.matched_forecasts[model_type] = {}
for model_name in self.model_names[model_type]:
self.matched_forecasts[mod... |
Calculates the continuous ranked probability score (CRPS) on the forecast data.
Args:
model_type: model type being evaluated.
model_name: machine learning model being evaluated.
condition_model_name: Name of the hail/no-hail model being evaluated
condition_thresh... |
Calculates a ROC curve at a specified intensity threshold.
Args:
model_type: type of model being evaluated (e.g. size).
model_name: machine learning model being evaluated
intensity_threshold: forecast bin used as the split point for evaluation
prob_thresholds: Ar... |
Samples every forecast hail object and returns an empirical distribution of possible maximum hail sizes.
Hail sizes are sampled from each predicted gamma distribution. The total number of samples equals
num_samples * area of the hail object. To get the maximum hail size for each realization, the maximu... |
Get signature and params
def get_params(self):
"""Get signature and params
"""
params = {
'key': self.get_app_key(),
'uid': self.user_id,
'widget': self.widget_code
}
products_number = len(self.products)
if self.get_api_type() == sel... |
Returns the number of hours, minutes, and seconds from the given
total number of seconds.
.. sourcecode:: python
>>> hms(1)
(0, 0, 1)
>>> hms(60)
(0, 1, 0)
>>> hms(3600)
(1, 0, 0)
>>> hms(3601)
(1, 0, 1)
>>> hms(3661)
(1, 1, 1)
... |
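Consistent with the doctests above, the conversion is two `divmod` steps (a minimal sketch of the documented behavior):

```python
def hms(seconds):
    """Split a total number of seconds into (hours, minutes, seconds)."""
    hours, remainder = divmod(seconds, 3600)
    minutes, secs = divmod(remainder, 60)
    return hours, minutes, secs
```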
Returns a human-readable text describing the total hours, minutes,
and seconds computed from the given total number of seconds.
.. sourcecode:: python
>>> hms_humanizado(0)
'zero segundos'
>>> hms_humanizado(1)
'1 segundo'
>>> hms_humanizado(2)
'2 segundos'
... |
Assigns names to grib2 message numbers whose name is 'unknown'. Names are based on NOAA grib2 abbreviations.
Args:
selected_variable(str): name of selected variable for loading
Names:
3: LCDC: Low Cloud Cover
4: MCDC: Medium Cloud Cover
5: HCDC: High Cloud Cover
... |
Loads data from grib2 file objects or list of grib2 file objects. Handles specific grib2 variable names
and grib2 message numbers.
Returns:
Array of data loaded from files in (time, y, x) dimensions, Units
def load_data(self):
"""
Loads data from grib2 fi... |
Load the forecast files into memory.
def load_forecasts(self):
"""
Load the forecast files into memory.
"""
run_date_str = self.run_date.strftime("%Y%m%d")
for model_name in self.model_names:
self.raw_forecasts[model_name] = {}
forecast_file = self.foreca... |
Aggregate the forecasts within the specified time windows.
def get_window_forecasts(self):
"""
Aggregate the forecasts within the specified time windows.
"""
for model_name in self.model_names:
self.window_forecasts[model_name] = {}
for size_threshold in self.siz... |
Loads observations and masking grid (if needed).
:param mask_threshold: Values greater than the threshold are kept, others are masked.
:return:
def load_obs(self, mask_threshold=0.5):
"""
Loads observations and masking grid (if needed).
:param mask_threshold: Values greater t... |
Use a dilation filter to grow positive observation areas by a specified number of grid points
:param dilation_radius: Number of times to dilate the grid.
:return:
def dilate_obs(self, dilation_radius):
"""
Use a dilation filter to grow positive observation areas by a specified number o... |
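The dilation step grows each positive region outward by one grid point per iteration. A pure-NumPy stand-in for the filter described above (the original likely uses `scipy.ndimage.binary_dilation`):

```python
import numpy as np

def dilate_binary(grid, iterations=1):
    """Grow positive areas by one grid point per iteration (4-neighborhood).

    Illustrative sketch only; not the implementation used above.
    """
    out = np.asarray(grid, dtype=bool).copy()
    for _ in range(iterations):
        shifted = out.copy()
        # OR each cell with its four orthogonal neighbors
        shifted[1:, :] |= out[:-1, :]
        shifted[:-1, :] |= out[1:, :]
        shifted[:, 1:] |= out[:, :-1]
        shifted[:, :-1] |= out[:, 1:]
        out = shifted
    return out
```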
Generate ROC Curve objects for each machine learning model, size threshold, and time window.
:param prob_thresholds: Probability thresholds for the ROC Curve
:param dilation_radius: Number of times to dilate the observation grid.
:return: a dictionary of DistributedROC objects.
def roc_curves(... |
Output reliability curves for each machine learning model, size threshold, and time window.
:param prob_thresholds:
:param dilation_radius:
:return:
def reliability_curves(self, prob_thresholds):
"""
Output reliability curves for each machine learning model, size threshold, and... |
Loads map coordinates from netCDF or pickle file created by util.makeMapGrids.
Args:
map_file: Filename for the file containing coordinate information.
Returns:
Latitude and longitude grids as numpy arrays.
def load_map_coordinates(map_file):
"""
Loads map coordinates from netCDF or p... |
For a given day, this module interpolates hourly MRMS data to a specified latitude and
longitude grid, and saves the interpolated grids to CF-compliant netCDF4 files.
Args:
start_date (datetime.datetime): Date of data being interpolated
variable (str): MRMS variable
interp_type (st... |
Loads data from MRMS GRIB2 files and handles compression duties if files are compressed.
def load_data(self):
"""
Loads data from MRMS GRIB2 files and handles compression duties if files are compressed.
"""
data = []
loaded_dates = []
loaded_indices = []
for t, t... |
Interpolates MRMS data to a different grid using cubic bivariate splines
def interpolate_grid(self, in_lon, in_lat):
"""
Interpolates MRMS data to a different grid using cubic bivariate splines
"""
out_data = np.zeros((self.data.shape[0], in_lon.shape[0], in_lon.shape[1]))
for d... |
Finds the largest value within a given radius of a point on the interpolated grid.
Args:
in_lon: 2D array of longitude values
in_lat: 2D array of latitude values
radius: radius of influence for largest neighbor search in degrees
Returns:
Array of interpo... |
Calls the interpolation function and then saves the MRMS data to a netCDF file. It will also create
separate directories for each variable if they are not already available.
def interpolate_to_netcdf(self, in_lon, in_lat, out_path, date_unit="seconds since 1970-01-01T00:00",
inte... |
Return a generator for data.
:param bool sync: whether to wait for current frame to finish then collect next frame
NOTE: a new ndarray is created for each call.
def get_data_generator_by_id(hardware_source_id, sync=True):
"""
Return a generator for data.
:param bool sync: whether... |
Parse config file for aliases and automatically register them.
Returns True if alias file was found and parsed (successfully or unsuccessfully).
Returns False if alias file was not found.
Config file is a standard .ini file with a section
def parse_hardware_aliases_config_file(config_path):
... |
Configure an alias.
Callers can use the alias to refer to the instrument or hardware source.
The alias should be lowercase, no spaces. The display name may be used to display alias to
the user. Neither the original instrument or hardware source id nor the alias id should ever
... |
Called from hardware source when new data arrives.
def update(self, data_and_metadata: DataAndMetadata.DataAndMetadata, state: str, sub_area, view_id) -> None:
"""Called from hardware source when new data arrives."""
self.__state = state
self.__sub_area = sub_area
hardware_source_id = ... |
Called from hardware source when data starts streaming.
def start(self):
"""Called from hardware source when data starts streaming."""
old_start_count = self.__start_count
self.__start_count += 1
if old_start_count == 0:
self.data_channel_start_event.fire() |
Connect to the data item reference, creating a crop graphic if necessary.
If the data item reference does not yet have an associated data item, add a
listener and wait for the data item to be set, then connect.
def connect_data_item_reference(self, data_item_reference):
"""Connect to the data ... |
Grab the earliest data from the buffer, blocking until one is available.
def grab_earliest(self, timeout: float=None) -> typing.List[DataAndMetadata.DataAndMetadata]:
"""Grab the earliest data from the buffer, blocking until one is available."""
timeout = timeout if timeout is not None else 10.0
... |
Grab the next data to finish from the buffer, blocking until one is available.
def grab_next(self, timeout: float=None) -> typing.List[DataAndMetadata.DataAndMetadata]:
"""Grab the next data to finish from the buffer, blocking until one is available."""
with self.__buffer_lock:
self.__buffe... |
Grab the next data to start from the buffer, blocking until one is available.
def grab_following(self, timeout: float=None) -> typing.List[DataAndMetadata.DataAndMetadata]:
"""Grab the next data to start from the buffer, blocking until one is available."""
self.grab_next(timeout)
return self.gr... |
Pause recording.
Thread safe and UI safe.
def pause(self) -> None:
"""Pause recording.
Thread safe and UI safe."""
with self.__state_lock:
if self.__state == DataChannelBuffer.State.started:
self.__state = DataChannelBuffer.State.paused |
Resume recording after pause.
Thread safe and UI safe.
def resume(self) -> None:
"""Resume recording after pause.
Thread safe and UI safe."""
with self.__state_lock:
if self.__state == DataChannelBuffer.State.paused:
self.__state = DataChannelBuffer.State.s... |
Takes a mapping and returns the n keys associated with the largest values
in descending order. If the mapping has fewer than n items, all its keys
are returned.
Equivalent to:
``next(zip(*heapq.nlargest(n, mapping.items(), key=lambda x: x[1])))``
Returns
-------
list of up to n keys from ... |
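The behavior described above can be sketched directly with `heapq.nlargest` (illustrative helper name):

```python
import heapq

def nlargest_keys(mapping, n):
    """Keys of the n largest values, in descending value order.

    Returns all keys when the mapping has fewer than n items.
    """
    return [k for k, _ in
            heapq.nlargest(n, mapping.items(), key=lambda kv: kv[1])]
```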
Return a new pqdict mapping keys from an iterable to the same value.
def fromkeys(cls, iterable, value, **kwargs):
"""
Return a new pqdict mapping keys from an iterable to the same value.
"""
return cls(((k, value) for k in iterable), **kwargs) |
Return a shallow copy of a pqdict.
def copy(self):
"""
Return a shallow copy of a pqdict.
"""
return self.__class__(self, key=self._keyfn, precedes=self._precedes) |
If ``key`` is in the pqdict, remove it and return its priority value,
else return ``default``. If ``default`` is not provided and ``key`` is
not in the pqdict, raise a ``KeyError``.
If ``key`` is not provided, remove the top item and return its key, or
raise ``KeyError`` if the pqdict i... |
Remove and return the item with highest priority. Raises ``KeyError``
if pqdict is empty.
def popitem(self):
"""
Remove and return the item with highest priority. Raises ``KeyError``
if pqdict is empty.
"""
heap = self._heap
position = self._position
tr... |
Return the item with highest priority. Raises ``KeyError`` if pqdict is
empty.
def topitem(self):
"""
Return the item with highest priority. Raises ``KeyError`` if pqdict is
empty.
"""
try:
node = self._heap[0]
except IndexError:
raise Ke... |
Add a new item. Raises ``KeyError`` if key is already in the pqdict.
def additem(self, key, value):
"""
Add a new item. Raises ``KeyError`` if key is already in the pqdict.
"""
if key in self._position:
raise KeyError('%s is already in the queue' % repr(key))
self[k... |