Create iterator objects for splits of the WikiText-2 dataset.
This is the simplest way to use the dataset, and assumes common
defaults for field, vocabulary, and iterator parameters.
Arguments:
batch_size: Batch size.
bptt_len: Length of sequences for backpropagation th... |
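For instance, a minimal usage sketch (assuming the legacy torchtext API; the keyword values are illustrative):
from torchtext import datasets

train_iter, valid_iter, test_iter = datasets.WikiText2.iters(
    batch_size=32,  # batch size
    bptt_len=35)    # sequence length for truncated backpropagation through time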
Apply _only_ the convert_token function of the current pipeline
to the input. If the input is a list, a list with the results of
applying the `convert_token` function to all input elements is
returned.
Arguments:
x: The input to apply the convert_token function to.
... |
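A hedged sketch of the described behavior (assuming a torchtext-style Pipeline; values illustrative):
p = Pipeline(convert_token=str.lower)
p.call("HeLLo")         # -> 'hello'
p.call(["FOO", "Bar"])  # -> ['foo', 'bar'] (applied to each element)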
Add a Pipeline to be applied before this processing pipeline.
Arguments:
pipeline: The Pipeline or callable to apply before this
Pipeline.
def add_before(self, pipeline):
"""Add a Pipeline to be applied before this processing pipeline.
Arguments:
pipeli... |
Add a Pipeline to be applied after this processing pipeline.
Arguments:
pipeline: The Pipeline or callable to apply after this
Pipeline.
def add_after(self, pipeline):
"""Add a Pipeline to be applied after this processing pipeline.
Arguments:
pipeline: ... |
Check that the split ratio argument is not malformed
def check_split_ratio(split_ratio):
"""Check that the split ratio argument is not malformed"""
valid_ratio = 0.
if isinstance(split_ratio, float):
# Only the train set relative ratio is provided
# Assert in bounds, validation size is zero... |
Create train-test(-valid?) splits from the instance's examples.
Arguments:
split_ratio (float or List of floats): a number in [0, 1] denoting the amount
of data to be used for the training split (rest is used for validation),
or a list of numbers denoting the relative s... |
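A hedged usage sketch (torchtext-style Dataset; the ratio is illustrative):
train, valid = dataset.split(split_ratio=0.8)  # 80% train, 20% validation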
Download and unzip an online archive (.zip, .gz, or .tgz).
Arguments:
root (str): Folder to download data to.
check (str or None): Folder whose existence indicates
that the dataset has already been downloaded, or
None to check the existence of root/{cls.n... |
Remove unknown words from dataset examples with respect to given field.
Arguments:
field_names (list(str)): Within each example, only the parts whose field
names appear in field_names will have their unknown words deleted.
def filter_examples(self, field_names):
"""Remove unknown words f... |
Create dataset objects for splits of the SNLI dataset.
This is the most flexible way to use the dataset.
Arguments:
text_field: The field that will be used for premise and hypothesis
data.
label_field: The field that will be used for label data.
pars... |
Create iterator objects for splits of the SNLI dataset.
This is the simplest way to use the dataset, and assumes common
defaults for field, vocabulary, and iterator parameters.
Arguments:
batch_size: Batch size.
device: Device to create batches on. Use -1 for CPU and No... |
Decode given bytes using specified escaping method.
:param byte_data: The byte-like object with bytes to decode.
:param escape: The escape method to use.
:param skip_printable: If True, leave 'printable ASCII' bytes in byte_data unescaped. Defaults to False.
:return: New unicode string, escaped with ... |
Apply the specified escape method on the given bytes.
:param byte_data: The byte-like object with bytes to escape.
:param escape: The escape method to use.
:param skip_printable: If True, leave 'printable ASCII' bytes in byte_data unescaped. Defaults to False.
:return: new bytes object with the escap... |
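A minimal sketch of one possible escape method (hex escaping), assuming the skip_printable semantics described above; the helper name is illustrative:
def escape_bytes(byte_data, skip_printable=False):
    out = []
    for b in bytearray(byte_data):
        if skip_printable and 0x20 <= b <= 0x7e:
            out.append(chr(b))  # keep printable ASCII as-is
        else:
            out.append('\\x{:02x}'.format(b))  # hex-escape everything else
    return ''.join(out)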
decode module id to string
based on @antirez moduleTypeNameByID function from redis/src/module.c
:param module_id: 64bit integer
:return: string
def _decode_module_id(self, module_id):
"""
decode module id to string
based on @antirez moduleTypeNameByID function from redi... |
Return the step value in a format suitable for display.
def format_step(step, zero_prefix=False):
"""Return the step value in a format suitable for display."""
if isinstance(step, int):
return "{:06}".format(step) if zero_prefix else "{}".format(step)
elif isinstance(step, tuple):
return "{:04}... |
Record metrics at a specific step. E.g.
my_history.log(34, loss=2.3, accuracy=0.2)
Okay to call multiple times for the same step. New values overwrite
older ones if they have the same metric name.
step: An integer or tuple of integers. If a tuple, then the first
value is c... |
Returns the total period between when the first and last steps
were logged. This usually corresponds to the total training time
if there were no gaps in the training.
def get_total_time(self):
"""Returns the total period between when the first and last steps
were logged. This usually ... |
Returns nodes connecting out of the given node (or list of nodes).
def outgoing(self, node):
"""Returns nodes connecting out of the given node (or list of nodes)."""
nodes = node if isinstance(node, list) else [node]
node_ids = [self.id(n) for n in nodes]
# Find edges outgoing from this... |
Returns all nodes that share the same parent (incoming node) with
the given node, including the node itself.
def siblings(self, node):
"""Returns all nodes that share the same parent (incoming node) with
the given node, including the node itself.
"""
incoming = self.incoming(nod... |
Remove a node and its edges.
def remove(self, nodes):
"""Remove a node and its edges."""
nodes = nodes if isinstance(nodes, list) else [nodes]
for node in nodes:
k = self.id(node)
self.edges = list(filter(lambda e: e[0] != k and e[1] != k, self.edges))
del se... |
Replace nodes with node. Edges incoming to nodes[0] are connected to
the new node, and nodes outgoing from nodes[-1] become outgoing from
the new node.
def replace(self, nodes, node):
"""Replace nodes with node. Edges incoming to nodes[0] are connected to
the new node, and nodes outgoin... |
Searches the graph for a sub-graph that matches the given pattern
and returns the first match it finds.
def search(self, pattern):
"""Searches the graph for a sub-graph that matches the given pattern
and returns the first match it finds.
"""
for node in self.nodes.values():
... |
Make up an ID for a sequence (list) of nodes.
Note: `getrandbits()` is very uninformative as a "readable" ID. Here, we build a name
such that when the mouse hovers over the drawn node in Jupyter, one can figure out
which original nodes make up the sequence. This is actually quite useful.
def se... |
Generate a GraphViz Dot graph.
Returns a GraphViz Digraph object.
def build_dot(self):
"""Generate a GraphViz Dot graph.
Returns a GraphViz Digraph object.
"""
from graphviz import Digraph
# Build GraphViz Digraph
dot = Digraph()
dot.attr("graph",
... |
Standardize data types. Converts PyTorch tensors to Numpy arrays,
and Numpy scalars to Python scalars.
def to_data(value):
"""Standardize data types. Converts PyTorch tensors to Numpy arrays,
and Numpy scalars to Python scalars."""
# TODO: Use get_framework() for better detection.
if value.__class_... |
Like print(), but recognizes tensors and arrays and shows
more details about them.
Example:
hl.write("My Tensor", my_tensor)
Prints:
My Tensor float32 (10, 3, 224, 224) min: 0.0 max: 1.0
def write(*args):
"""Like print(), but recognizes tensors and arrays and show
mo... |
Normalize an image to [0, 1] range.
def norm(image):
"""Normalize an image to [0, 1] range."""
min_value = image.min()
max_value = image.max()
if min_value == max_value:
return image - min_value
return (image - min_value) / (max_value - min_value) |
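Usage sketch (illustrative values):
import numpy as np
norm(np.array([[0.0, 5.0], [10.0, 2.5]]))  # values rescaled into [0, 1]
norm(np.ones((2, 2)))  # constant image: returns all zeros instead of dividing by zero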
images: A list of images. It can be either:
- A list of Numpy arrays. Each array represents an image.
- A list of lists of Numpy arrays. In this case, the images in
the inner lists are concatenated to make one image.
def show_images(images, titles=None, cols=5, **kwargs):
"""
images: ... |
Inserts a text summary at the top that lists the number of steps and total
training time.
def draw_summary(self, history, title=""):
"""Inserts a text summary at the top that lists the number of steps and total
training time."""
# Generate summary string
time_str = str(history.g... |
metrics: One or more metrics parameters. Each represents the history
of one metric.
def draw_plot(self, metrics, labels=None, ylabel=""):
"""
metrics: One or more metrics parameters. Each represents the history
of one metric.
"""
metrics = metrics if isinstance(m... |
Display a series of images at different time steps.
def draw_image(self, metric, limit=5):
"""Display a series of images at different time steps."""
rows = 1
cols = limit
self.ax.axis("off")
# Take the Axes gridspec and divide it into a grid
gs = matplotlib.gridspec.Grid... |
Draw a series of histograms of the selected keys over different
training steps.
def draw_hist(self, metric, title=""):
"""Draw a series of histograms of the selected keys over different
training steps.
"""
# TODO: assert isinstance(list(values.values())[0], np.ndarray)
... |
Load the data in memory.
Args:
dataset: string in ['train', 'test']
def _load(self, dataset='train'):
"""Load the data in memory.
Args:
dataset: string in ['train', 'test']
"""
data, labels = None, None
if dataset == 'train':  # compare string values with ==, not identity
files = [... |
Build a simple convnet (BN before ReLU).
Args:
inputs: a tensor of size [batch_size, height, width, channels]
mode: string in ['train', 'test']
Returns:
the last op containing the predictions
Note:
Best score
Step: 7015 - Epoch: 18/20 ... |
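A minimal sketch of one such "BN before ReLU" block, assuming TensorFlow 1.x layers (names and hyperparameters illustrative):
import tensorflow as tf

def conv_block(inputs, filters, training):
    # bias is redundant when the conv is immediately followed by batch norm
    x = tf.layers.conv2d(inputs, filters, kernel_size=3, padding='same', use_bias=False)
    x = tf.layers.batch_normalization(x, training=training)  # BN first...
    return tf.nn.relu(x)                                      # ...then ReLU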
Download and extract the tarball from Alex Krizhevsky's website.
def maybe_download_and_extract(self):
"""Download and extract the tarball from Alex Krizhevsky's website."""
if not os.path.exists(self.cifar10_dir):
if not os.path.exists(self.data_dir):
os.makedirs(self.data... |
List all the nodes in a PyTorch graph.
def dump_pytorch_graph(graph):
"""List all the nodes in a PyTorch graph."""
f = "{:25} {:40} {} -> {}"
print(f.format("kind", "scopeName", "inputs", "outputs"))
for node in graph.nodes():
print(f.format(node.kind(), node.scopeName(),
... |
Returns a unique ID for a node.
def pytorch_id(node):
"""Returns a unique ID for a node."""
# After ONNX simplification, the scopeName is not unique anymore
# so append node outputs to guarantee uniqueness
return node.scopeName() + "/outputs/" + "/".join([o.uniqueName() for o in node.outputs()]) |
Return the output shape of the given Pytorch node.
def get_shape(torch_node):
"""Return the output shape of the given Pytorch node."""
# Extract node output shape from the node string representation
# This is a hack because there doesn't seem to be an official way to do it.
# See my question in the PyT... |
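A hedged sketch of that string-parsing hack; the regex and the printed node format are assumptions based on TorchScript's textual IR:
import re

def get_shape(torch_node):
    # a node prints like: '%14 : Float(1, 64, 56, 56) = onnx::Conv[...](%x)'
    match = re.search(r"Float\(([\d\s,]+)\)", str(torch_node))
    return tuple(int(d) for d in match.group(1).split(",")) if match else None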
List all the nodes in a TF graph.
tfgraph: A TF Graph object.
tfgraphdef: A TF GraphDef object.
def dump_tf_graph(tfgraph, tfgraphdef):
"""List all the nodes in a TF graph.
tfgraph: A TF Graph object.
tfgraphdef: A TF GraphDef object.
"""
print("Nodes ({})".format(len(tfgraphdef.node)))
... |
Convert TF graph to directed graph
tf_graph: A TF Graph object.
output: Name of the output node (string).
verbose: Set to True for debug print output
def import_graph(hl_graph, tf_graph, output=None, verbose=False):
"""Convert TF graph to directed graph
tf_graph: A TF Graph object.
output: Name o... |
Send FetchRequests for all assigned partitions that do not already have
an in-flight fetch or pending fetch data.
Returns:
List of Futures: each future resolves to a FetchResponse
def send_fetches(self):
"""Send FetchRequests for all assigned partitions that do not already have
... |
Lookup and set offsets for any partitions which are awaiting an
explicit reset.
Arguments:
partitions (set of TopicPartitions): the partitions to reset
def reset_offsets_if_needed(self, partitions):
"""Lookup and set offsets for any partitions which are awaiting an
explicit... |
Update the fetch positions for the provided partitions.
Arguments:
partitions (list of TopicPartitions): partitions to update
Raises:
NoOffsetForPartitionError: if no offset is stored for a given
partition and no reset policy is available
def update_fetch_posit... |
Reset offsets for the given partition using the offset reset strategy.
Arguments:
partition (TopicPartition): the partition that needs reset offset
Raises:
NoOffsetForPartitionError: if no offset reset strategy is defined
def _reset_offset(self, partition):
"""Reset of... |
Fetch offset for each partition passed in ``timestamps`` map.
Blocks until offsets are obtained, a non-retriable exception is raised
or ``timeout_ms`` passed.
Arguments:
timestamps: {TopicPartition: int} dict with timestamps to fetch
offsets by. -1 for the latest av... |
Returns previously fetched records and updates consumed offsets.
Arguments:
max_records (int): Maximum number of records returned. Defaults
to max_poll_records configuration.
Raises:
OffsetOutOfRangeError: if no subscription offset_reset_strategy
Cor... |
Iterate over fetched_records
def _message_generator(self):
"""Iterate over fetched_records"""
while self._next_partition_records or self._completed_fetches:
if not self._next_partition_records:
completion = self._completed_fetches.popleft()
self._next_partit... |
Fetch offsets for each partition in the timestamps dict. This may send
requests to multiple nodes, depending on which node leads each partition.
Arguments:
timestamps (dict): {TopicPartition: int} mapping of fetching
timestamps.
Returns:
Future: resolves to a mapping... |
Callback for the response of the list offset call above.
Arguments:
future (Future): the future to update based on response
response (OffsetResponse): response from the server
Raises:
AssertionError: if response does not match partition
def _handle_offset_response(... |
Create fetch requests for all assigned partitions, grouped by node.
FetchRequests are skipped if a partition has no leader or if the node already has requests in flight
Returns:
dict: {node_id: FetchRequest, ...} (version depends on api_version)
def _create_fetch_requests(self):
"""Create fetch requests for all as... |
The callback for fetch completion
def _handle_fetch_response(self, request, send_time, response):
"""The callback for fetch completion"""
fetch_offsets = {}
for topic, partitions in request.topics:
for partition_data in partitions:
partition, offset = partition_data[... |
After each partition is parsed, we update the current metric totals
with the total bytes and number of records parsed. After all partitions
have reported, we write the metric.
def record(self, partition, num_bytes, num_records):
"""
After each partition is parsed, we update the current ... |
Check whether a commit is due based on the number of messages, and commit if so
def _auto_commit(self):
"""
Check whether a commit is due based on the number of messages, and commit if so
"""
# Check if we are supposed to do an auto-commit
if not self.auto_commit or self.auto_commit_every_n is None:
... |
Gets the pending message count
Keyword Arguments:
partitions (list): list of partitions to check for, default is to check all
def pending(self, partitions=None):
"""
Gets the pending message count
Keyword Arguments:
partitions (list): list of partitions to chec... |
Pure-python Murmur2 implementation.
Based on java client, see org.apache.kafka.common.utils.Utils.murmur2
Args:
data (bytes): opaque bytes
Returns: MurmurHash2 of data
def murmur2(data):
"""Pure-python Murmur2 implementation.
Based on java client, see org.apache.kafka.common.utils.Utils... |
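A usage sketch: Kafka's default partitioner (mirroring the Java client) masks the hash to a non-negative int and takes it modulo the partition count; the helper name is illustrative:
def partition_for(key_bytes, num_partitions):
    return (murmur2(key_bytes) & 0x7fffffff) % num_partitions

partition_for(b"user-42", 12)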
Fetch the specified number of messages
Keyword Arguments:
count: Indicates the maximum number of messages to be fetched
block: If True, the API will block till all messages are fetched.
If block is a positive integer the API will block until that
many mes... |
Set the high-water mark in the current context.
In order to know the current partition, it is helpful to initialize
the consumer to provide partition info via:
.. code:: python
consumer.provide_partition_info()
def mark(self, partition, offset):
"""
Set the high-w... |
Commit this context's offsets:
- If the high-water mark has moved, commit up to and position the
consumer at the high-water mark.
- Otherwise, reset the consumer to the initial offsets.
def commit(self):
"""
Commit this context's offsets:
- If the high-wat... |
Rollback this context:
- Position the consumer at the initial offsets.
def rollback(self):
"""
Rollback this context:
- Position the consumer at the initial offsets.
"""
self.logger.info("Rolling back context: %s", self.initial_offsets)
self.update_consumer... |
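Putting mark/commit/rollback together, a hedged usage sketch (assuming the context object is usable as a context manager; names illustrative):
consumer.provide_partition_info()  # messages arrive as (partition, message)
with OffsetCommitContext(consumer) as context:
    for partition, message in consumer:
        process(message)                         # hypothetical handler
        context.mark(partition, message.offset)
# a clean exit commits up to the high-water mark; an exception rolls back
# to the initial offsets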
Commit explicit partition/offset pairs.
def commit_partition_offsets(self, partition_offsets):
"""
Commit explicit partition/offset pairs.
"""
self.logger.debug("Committing partition offsets: %s", partition_offsets)
commit_requests = [
OffsetCommitRequestPayload(sel... |
Update consumer offsets to explicit positions.
def update_consumer_offsets(self, partition_offsets):
"""
Update consumer offsets to explicit positions.
"""
self.logger.debug("Updating consumer offsets to: %s", partition_offsets)
for partition, offset in partition_offsets.items(... |
Get the current coordinator
Returns: the current coordinator id or None if it is unknown
def coordinator(self):
"""Get the current coordinator
Returns: the current coordinator id or None if it is unknown
"""
if self.coordinator_id is None:
return None
elif ... |
Block until the coordinator for this group is known
(and we have an active connection -- java client uses unsent queue).
def ensure_coordinator_ready(self):
"""Block until the coordinator for this group is known
(and we have an active connection -- java client uses unsent queue).
"""
... |
Check the status of the heartbeat thread (if it is active) and indicate
the liveness of the client. This must be called periodically after
joining with :meth:`.ensure_active_group` to ensure that the member stays
in the group. If an interval of time longer than the provided rebalance
tim... |
Ensure that the group is active (i.e. joined and synced)
def ensure_active_group(self):
"""Ensure that the group is active (i.e. joined and synced)"""
with self._client._lock, self._lock:
if self._heartbeat_thread is None:
self._start_heartbeat_thread()
while se... |
Join the group and return the assignment for the next generation.
This function handles both JoinGroup and SyncGroup, delegating to
:meth:`._perform_assignment` if elected leader by the coordinator.
Returns:
Future: resolves to the encoded-bytes assignment returned from the
... |
Perform leader synchronization and send back the assignment
for the group via SyncGroupRequest
Arguments:
response (JoinResponse): broker response to parse
Returns:
Future: resolves to member assignment encoded-bytes
def _on_join_leader(self, response):
"""
... |
Discover the current coordinator for the group.
Returns:
Future: resolves to the node id of the coordinator
def _send_group_coordinator_request(self):
"""Discover the current coordinator for the group.
Returns:
Future: resolves to the node id of the coordinator
... |
Mark the current coordinator as dead.
def coordinator_dead(self, error):
"""Mark the current coordinator as dead."""
if self.coordinator_id is not None:
log.warning("Marking the coordinator dead (node %s) for group %s: %s.",
self.coordinator_id, self.group_id, error)... |
Get the current generation state if the group is stable.
Returns: the current generation or None if the group is unjoined/rebalancing
def generation(self):
"""Get the current generation state if the group is stable.
Returns: the current generation or None if the group is unjoined/rebalancing
... |
Reset the generation and memberId because we have fallen out of the group.
def reset_generation(self):
"""Reset the generation and memberId because we have fallen out of the group."""
with self._lock:
self._generation = Generation.NO_GENERATION
self.rejoin_needed = True
... |
Leave the current group and reset local generation/memberId.
def maybe_leave_group(self):
"""Leave the current group and reset local generation/memberId."""
with self._client._lock, self._lock:
if (not self.coordinator_unknown()
and self.state is not MemberState.UNJOINED
... |
Send a heartbeat request
def _send_heartbeat_request(self):
"""Send a heartbeat request"""
if self.coordinator_unknown():
e = Errors.GroupCoordinatorNotAvailableError(self.coordinator_id)
return Future().failure(e)
elif not self._client.ready(self.coordinator_id, metada... |
Create a MetricName with the given name, group, description and tags,
plus default tags specified in the metric configuration.
A tag in tags takes precedence if the same tag key is also specified
in the default metric configuration.
Arguments:
name (str): The name of the metric
... |
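A hedged example (assuming kafka-python's Metrics facade; tag values illustrative):
metrics = Metrics()
name = metrics.metric_name('messages-sent', 'producer-metrics',
                           description='total messages sent',
                           tags={'client-id': 'demo'})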
Get or create a sensor with the given unique name and zero or
more parent sensors. All parent sensors will receive every value
recorded with this sensor.
Arguments:
name (str): The name of the sensor
config (MetricConfig, optional): A default configuration to use
... |
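Continuing the sketch above, a sensor records values into its registered stats (Avg is one of kafka-python's stat classes; the sensor name is illustrative):
from kafka.metrics.stats import Avg

sensor = metrics.sensor('message-sizes')
sensor.add(metrics.metric_name('message-size-avg', 'producer-metrics'), Avg())
sensor.record(1024)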
Remove a sensor (if it exists), associated metrics and its children.
Arguments:
name (str): The name of the sensor to be removed
def remove_sensor(self, name):
"""
Remove a sensor (if it exists), associated metrics and its children.
Arguments:
name (str): The n... |
Add a metric to monitor an object that implements measurable.
This metric won't be associated with any sensor.
This is a way to expose existing values as metrics.
Arguments:
metricName (MetricName): The name of the metric
measurable (AbstractMeasurable): The measurable t... |
Remove a metric if it exists and return it. Return None otherwise.
If a metric is removed, `metric_removal` will be invoked
for each reporter.
Arguments:
metric_name (MetricName): The name of the metric
Returns:
KafkaMetric: the removed `KafkaMetric` or None if ... |
Add a MetricReporter
def add_reporter(self, reporter):
"""Add a MetricReporter"""
with self._lock:
reporter.init(list(self.metrics.values()))
self._reporters.append(reporter) |
Close this metrics repository.
def close(self):
"""Close this metrics repository."""
for reporter in self._reporters:
reporter.close()
self._metrics.clear() |
Construct a Snappy Message containing multiple Messages
The given payloads will be encoded, compressed, and sent as a single atomic
message to Kafka.
Arguments:
payloads: list(bytes), a list of payloads to be sent to Kafka
key: bytes, a key used for partition routing (optional)
def cr... |
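A hedged usage sketch, following the argument description above:
msg = create_snappy_message([b'payload-1', b'payload-2'], key=b'routing-key')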
Create a message set using the given codec.
If codec is CODEC_NONE, return a list of raw Kafka messages. Otherwise,
return a list containing a single codec-encoded message.
def create_message_set(messages, codec=CODEC_NONE, key=None, compresslevel=None):
"""Create a message set using the given codec.
... |
Encode the common request envelope
def _encode_message_header(cls, client_id, correlation_id, request_key,
version=0):
"""
Encode the common request envelope
"""
return struct.pack('>hhih%ds' % len(client_id),
request_key, ... |
Encode a MessageSet. Unlike other arrays in the protocol,
MessageSets are not length-prefixed
Format
======
MessageSet => [Offset MessageSize Message]
Offset => int64
MessageSize => int32
def _encode_message_set(cls, messages):
"""
Encode a MessageSe... |
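A hedged sketch of that wire format, with struct formats derived from the field sizes above (helper name illustrative):
import struct

def encode_message_set(entries):
    # entries: iterable of (offset, encoded_message_bytes) pairs
    buf = []
    for offset, message in entries:
        buf.append(struct.pack('>qi', offset, len(message)))  # Offset: int64, MessageSize: int32
        buf.append(message)
    return b''.join(buf)  # the set itself is deliberately not length-prefixed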
Encode a ProduceRequest struct
Arguments:
payloads: list of ProduceRequestPayload
acks: How "acky" you want the request to be
1: written to disk by the leader
0: immediate response
-1: waits for all replicas to be in sync
timeo... |
Decode ProduceResponse to ProduceResponsePayload
Arguments:
response: ProduceResponse
Return: list of ProduceResponsePayload
def decode_produce_response(cls, response):
"""
Decode ProduceResponse to ProduceResponsePayload
Arguments:
response: ProduceRe... |
Encode a FetchRequest struct
Arguments:
payloads: list of FetchRequestPayload
max_wait_time (int, optional): ms to block waiting for min_bytes
data. Defaults to 100.
min_bytes (int, optional): minimum bytes required to return before
max_wait_... |
Decode FetchResponse struct to FetchResponsePayloads
Arguments:
response: FetchResponse
def decode_fetch_response(cls, response):
"""
Decode FetchResponse struct to FetchResponsePayloads
Arguments:
response: FetchResponse
"""
return [
... |
Decode OffsetResponse into OffsetResponsePayloads
Arguments:
response: OffsetResponse
Returns: list of OffsetResponsePayloads
def decode_offset_response(cls, response):
"""
Decode OffsetResponse into OffsetResponsePayloads
Arguments:
response: OffsetRe... |
Decode OffsetResponse_v2 into ListOffsetResponsePayloads
Arguments:
response: OffsetResponse_v2
Returns: list of ListOffsetResponsePayloads
def decode_list_offset_response(cls, response):
"""
Decode OffsetResponse_v2 into ListOffsetResponsePayloads
Arguments:
... |
Encode a MetadataRequest
Arguments:
topics: list of strings
def encode_metadata_request(cls, topics=(), payloads=None):
"""
Encode a MetadataRequest
Arguments:
topics: list of strings
"""
if payloads is not None:
topics = payloads
... |
Encode a ConsumerMetadataRequest
Arguments:
client_id: string
correlation_id: int
payloads: string (consumer group)
def encode_consumer_metadata_request(cls, client_id, correlation_id, payloads):
"""
Encode a ConsumerMetadataRequest
Arguments:
... |
Decode bytes to a kafka.structs.ConsumerMetadataResponse
Arguments:
data: bytes to decode
def decode_consumer_metadata_response(cls, data):
"""
Decode bytes to a kafka.structs.ConsumerMetadataResponse
Arguments:
data: bytes to decode
"""
((corre... |
Encode an OffsetCommitRequest struct
Arguments:
group: string, the consumer group you are committing offsets for
payloads: list of OffsetCommitRequestPayload
def encode_offset_commit_request(cls, group, payloads):
"""
Encode an OffsetCommitRequest struct
Argume... |
Decode OffsetCommitResponse to an OffsetCommitResponsePayload
Arguments:
response: OffsetCommitResponse
def decode_offset_commit_response(cls, response):
"""
Decode OffsetCommitResponse to an OffsetCommitResponsePayload
Arguments:
response: OffsetCommitResponse... |
Encode an OffsetFetchRequest struct. The request is encoded using
version 0 if from_kafka is false, indicating a request for Zookeeper
offsets. It is encoded using version 1 otherwise, indicating a request
for Kafka offsets.
Arguments:
group: string, the consumer group you a... |
Decode OffsetFetchResponse to OffsetFetchResponsePayloads
Arguments:
response: OffsetFetchResponse
def decode_offset_fetch_response(cls, response):
"""
Decode OffsetFetchResponse to OffsetFetchResponsePayloads
Arguments:
response: OffsetFetchResponse
""... |
Return a file descriptor from a file object.
This wraps _fileobj_to_fd() to do an exhaustive search in case
the object is invalid but we still have it in our map. This
is used by unregister() so we can unregister an object that
was previously registered even if it is closed. It is als... |
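For reference, a minimal sketch of the underlying _fileobj_to_fd lookup, mirroring the stdlib selectors helper:
def _fileobj_to_fd(fileobj):
    if isinstance(fileobj, int):
        fd = fileobj
    else:
        try:
            fd = int(fileobj.fileno())
        except (AttributeError, TypeError, ValueError):
            raise ValueError("Invalid file object: {!r}".format(fileobj))
    if fd < 0:
        raise ValueError("Invalid file descriptor: {}".format(fd))
    return fd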
Validate that this sensor doesn't end up referencing itself.
def _check_forest(self, sensors):
"""Validate that this sensor doesn't end up referencing itself."""
if self in sensors:
raise ValueError('Circular dependency in sensors: %s is its own '
'parent.' % (se... |
Record a value at a known time.
Arguments:
value (double): The value we are recording
time_ms (int): A POSIX timestamp in milliseconds.
Default: The time when record() is evaluated (now)
Raises:
QuotaViolationException: if recording this value moves a... |
Check if we have violated our quota for any metric that
has a configured quota
def _check_quotas(self, time_ms):
"""
Check if we have violated our quota for any metric that
has a configured quota
"""
for metric in self._metrics:
if metric.config and metric.co... |
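A hedged sketch of the truncated loop body, assuming kafka-python's Quota.is_acceptable and QuotaViolationError (message text illustrative):
for metric in self._metrics:
    if metric.config and metric.config.quota:
        value = metric.value(time_ms)
        if not metric.config.quota.is_acceptable(value):
            raise QuotaViolationError('(%s) violated quota. Actual: %d'
                                      % (metric.metric_name, value))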