text |
|---|
Get BrokerMetadata
Arguments:
broker_id (int): node_id for a broker to check
Returns:
BrokerMetadata or None if not found
def broker_metadata(self, broker_id):
"""Get BrokerMetadata
Arguments:
broker_id (int): node_id for a broker to check
... |
Return set of all partitions for topic (whether available or not)
Arguments:
topic (str): topic to check for partitions
Returns:
set: {partition (int), ...}
def partitions_for_topic(self, topic):
"""Return set of all partitions for topic (whether available or not)
... |
Return set of partitions with known leaders
Arguments:
topic (str): topic to check for partitions
Returns:
set: {partition (int), ...}
None if topic not found.
def available_partitions_for_topic(self, topic):
"""Return set of partitions with known leaders
... |
Return node_id of leader, -1 unavailable, None if unknown.
def leader_for_partition(self, partition):
"""Return node_id of leader, -1 unavailable, None if unknown."""
if partition.topic not in self._partitions:
return None
elif partition.partition not in self._partitions[partition.t... |
Milliseconds until metadata should be refreshed
def ttl(self):
"""Milliseconds until metadata should be refreshed"""
now = time.time() * 1000
if self._need_update:
ttl = 0
else:
metadata_age = now - self._last_successful_refresh_ms
ttl = self.config['... |
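The ttl() snippet above computes how long cached cluster metadata stays valid. A minimal standalone sketch of that calculation, assuming the kafka-python config keys 'metadata_max_age_ms' and 'retry_backoff_ms' and millisecond timestamps (not the library's exact code):

import time

def metadata_ttl_ms(last_successful_refresh_ms, last_refresh_ms, need_update,
                    metadata_max_age_ms=300000, retry_backoff_ms=100):
    # Milliseconds until metadata should be refreshed; 0 means refresh now.
    now = time.time() * 1000
    if need_update:
        return 0
    metadata_age = now - last_successful_refresh_ms
    ttl = metadata_max_age_ms - metadata_age       # time left before staleness
    retry_age = now - last_refresh_ms
    next_retry = retry_backoff_ms - retry_age      # honor backoff after the last attempt
    return max(ttl, next_retry, 0)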
Flags metadata for update, return Future()
Actual update must be handled separately. This method will only
change the reported ttl()
Returns:
kafka.future.Future (value will be the cluster object after update)
def request_update(self):
"""Flags metadata for update, return ... |
Get set of known topics.
Arguments:
exclude_internal_topics (bool): Whether records from internal topics
(such as offsets) should be exposed to the consumer. If set to
True the only way to receive records from an internal topic is
subscribing to it. D... |
Update cluster state given a failed MetadataRequest.
def failed_update(self, exception):
"""Update cluster state given a failed MetadataRequest."""
f = None
with self._lock:
if self._future:
f = self._future
self._future = None
if f:
... |
Update cluster state given a MetadataResponse.
Arguments:
metadata (MetadataResponse): broker response to a metadata request
Returns: None
def update_metadata(self, metadata):
"""Update cluster state given a MetadataResponse.
Arguments:
metadata (MetadataRespo... |
Update with metadata for a group coordinator
Arguments:
group (str): name of group from GroupCoordinatorRequest
response (GroupCoordinatorResponse): broker response
Returns:
bool: True if metadata is updated, False on error
def add_group_coordinator(self, group, re... |
Returns a copy of cluster metadata with partitions added
def with_partitions(self, partitions_to_add):
"""Returns a copy of cluster metadata with partitions added"""
new_metadata = ClusterMetadata(**self.config)
new_metadata._brokers = copy.deepcopy(self._brokers)
new_metadata._partitio... |
Manually assign a list of TopicPartitions to this consumer.
Arguments:
partitions (list of TopicPartition): Assignment for this instance.
Raises:
IllegalStateError: If consumer has already called
:meth:`~kafka.KafkaConsumer.subscribe`.
Warning:
... |
Close the consumer, waiting indefinitely for any needed cleanup.
Keyword Arguments:
autocommit (bool): If auto-commit is configured for this consumer,
this optional flag causes the consumer to attempt to commit any
pending consumed offsets prior to close. Default: Tr... |
Commit offsets to kafka asynchronously, optionally firing callback.
This commits offsets only to Kafka. The offsets committed using this API
will be used on the first fetch after every rebalance and also on
startup. As such, if you need to store offsets in anything other than
Kafka, thi... |
Commit offsets to kafka, blocking until success or error.
This commits offsets only to Kafka. The offsets committed using this API
will be used on the first fetch after every rebalance and also on
startup. As such, if you need to store offsets in anything other than
Kafka, this API shou... |
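Both commit APIs above store offsets only in Kafka itself. A hedged usage sketch with kafka-python's KafkaConsumer, where handle() and done() are hypothetical stand-ins for application logic:

from kafka import KafkaConsumer

consumer = KafkaConsumer('my-topic',
                         bootstrap_servers='localhost:9092',
                         group_id='my-group',
                         enable_auto_commit=False)

def on_commit(offsets, response_or_error):
    # Invoked when the asynchronous commit completes (response or exception).
    print('commit result:', response_or_error)

for message in consumer:
    handle(message)                             # hypothetical processing step
    consumer.commit_async(callback=on_commit)   # non-blocking, fire-and-forget
    if done():                                  # hypothetical stop condition
        break

consumer.commit()  # blocking commit of the final consumed offsets before shutdown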
Get the last committed offset for the given partition.
This offset will be used as the position for the consumer
in the event of a failure.
This call may block to do a remote call if the partition in question
isn't assigned to this consumer or if the consumer hasn't yet
initial... |
Get all topics the user is authorized to view.
Returns:
set: topics
def topics(self):
"""Get all topics the user is authorized to view.
Returns:
set: topics
"""
cluster = self._client.cluster
if self._client._metadata_refresh_in_progress and sel... |
Fetch data from assigned topics / partitions.
Records are fetched and returned in batches by topic-partition.
On each poll, consumer will try to use the last consumed offset as the
starting offset and fetch sequentially. The last consumed offset can be
manually set through :meth:`~kafka... |
Do one round of polling. In addition to checking for new data, this does
any needed heart-beating, auto-commits, and offset updates.
Arguments:
timeout_ms (int): The maximum time in milliseconds to block.
Returns:
dict: Map of topic to list of records (may be empty).
d... |
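A short consumption loop against this poll() contract, assuming a local broker; the returned mapping may be empty when nothing arrived within the timeout:

from kafka import KafkaConsumer

consumer = KafkaConsumer('my-topic',
                         bootstrap_servers='localhost:9092',
                         group_id='my-group')

while True:
    batches = consumer.poll(timeout_ms=1000)   # {TopicPartition: [ConsumerRecord, ...]}
    for tp, records in batches.items():
        for record in records:
            print(tp.topic, tp.partition, record.offset, record.value)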
Get the offset of the next record that will be fetched
Arguments:
partition (TopicPartition): Partition to check
Returns:
int: Offset
def position(self, partition):
"""Get the offset of the next record that will be fetched
Arguments:
partition (Top... |
Last known highwater offset for a partition.
A highwater offset is the offset that will be assigned to the next
message that is produced. It may be useful for calculating lag, by
comparing with the reported position. Note that both position and
highwater refer to the *next* offset -- i.... |
Suspend fetching from the requested partitions.
Future calls to :meth:`~kafka.KafkaConsumer.poll` will not return any
records from these partitions until they have been resumed using
:meth:`~kafka.KafkaConsumer.resume`.
Note: This method does not affect partition subscription. In parti... |
Manually specify the fetch offset for a TopicPartition.
Overrides the fetch offsets that the consumer will use on the next
:meth:`~kafka.KafkaConsumer.poll`. If this API is invoked for the same
partition more than once, the latest offset will be used on the next
:meth:`~kafka.KafkaConsu... |
Seek to the oldest available offset for partitions.
Arguments:
*partitions: Optionally provide specific TopicPartitions, otherwise
default to all assigned partitions.
Raises:
AssertionError: If any partition is not currently assigned, or if
no pa... |
Seek to the most recent available offset for partitions.
Arguments:
*partitions: Optionally provide specific TopicPartitions, otherwise
default to all assigned partitions.
Raises:
AssertionError: If any partition is not currently assigned, or if
... |
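The three seek variants above only take effect on the next poll(). A usage sketch with manual assignment:

from kafka import KafkaConsumer, TopicPartition

tp = TopicPartition('my-topic', 0)
consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
consumer.assign([tp])

consumer.seek(tp, 42)           # next fetch for this partition starts at offset 42
consumer.seek_to_beginning(tp)  # or jump to the oldest available offset
consumer.seek_to_end(tp)        # or to the offset after the last message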
Subscribe to a list of topics, or a topic regex pattern.
Partitions will be dynamically assigned via a group coordinator.
Topic subscriptions are not incremental: this list will replace the
current assignment (if there is one).
This method is incompatible with :meth:`~kafka.KafkaConsum... |
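A brief example of the non-incremental subscribe() described above; each call replaces the previous subscription, and topics and pattern are mutually exclusive:

from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers='localhost:9092', group_id='reporting')

consumer.subscribe(topics=['orders', 'payments'])  # replaces any earlier subscription
consumer.subscribe(pattern='^metrics-.*')          # or match topics by regex instead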
Unsubscribe from all topics and clear all assigned partitions.
def unsubscribe(self):
"""Unsubscribe from all topics and clear all assigned partitions."""
self._subscription.unsubscribe()
self._coordinator.close()
self._client.cluster.need_all_topic_metadata = False
self._client... |
Look up the offsets for the given partitions by timestamp. The
returned offset for each partition is the earliest offset whose
timestamp is greater than or equal to the given timestamp in the
corresponding partition.
This is a blocking call. The consumer does not have to be assigned the... |
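A usage sketch of that timestamp lookup, seeking to the first message at most one hour old (timestamps are milliseconds since the epoch):

import time
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
tp = TopicPartition('my-topic', 0)

one_hour_ago = int(time.time() * 1000) - 3600 * 1000
offsets = consumer.offsets_for_times({tp: one_hour_ago})  # {tp: OffsetAndTimestamp or None}
if offsets[tp] is not None:
    consumer.assign([tp])
    consumer.seek(tp, offsets[tp].offset)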
Get the first offset for the given partitions.
This method does not change the current consumer position of the
partitions.
Note:
This method may block indefinitely if the partition does not exist.
Arguments:
partitions (list): List of TopicPartition instances ... |
Get the last offset for the given partitions. The last offset of a
partition is the offset of the upcoming message, i.e. the offset of the
last available message + 1.
This method does not change the current consumer position of the
partitions.
Note:
This method may ... |
Return True iff this consumer can/should join a broker-coordinated group.
def _use_consumer_group(self):
"""Return True iff this consumer can/should join a broker-coordinated group."""
if self.config['api_version'] < (0, 9):
return False
elif self.config['group_id'] is None:
... |
Set the fetch position to the committed position (if there is one)
or reset it using the offset reset policy the user has configured.
Arguments:
partitions (List[TopicPartition]): The partitions that need
updating fetch positions.
Raises:
NoOffsetForPart... |
Return a nested dictionary snapshot of all metrics and their
values at this time. Example:
{
'category': {
'metric1_name': 42.0,
'metric2_name': 'foo'
}
}
def snapshot(self):
"""
Return a nested dictionary snapshot of all m... |
Return a string category for the metric.
The category is made up of this reporter's prefix and the
metric's group and tags.
Examples:
prefix = 'foo', group = 'bar', tags = {'a': 1, 'b': 2}
returns: 'foo.bar.a=1,b=2'
prefix = 'foo', group = 'bar', tags = Non... |
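A standalone sketch of that category naming rule (not the reporter's actual code), sorting tags for a stable key:

def metric_category(prefix, group, tags=None):
    # 'prefix.group' plus an optional 'k1=v1,k2=v2' suffix built from the tags.
    parts = [prefix, group]
    if tags:
        parts.append(','.join('%s=%s' % (k, v) for k, v in sorted(tags.items())))
    return '.'.join(parts)

assert metric_category('foo', 'bar', {'a': 1, 'b': 2}) == 'foo.bar.a=1,b=2'
assert metric_category('foo', 'bar') == 'foo.bar'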
Queues a node for asynchronous connection during the next .poll()
def maybe_connect(self, node_id, wakeup=True):
"""Queues a node for asynchronous connection during the next .poll()"""
if self._can_connect(node_id):
self._connecting.add(node_id)
# Wakeup signal is useful in case... |
Idempotent non-blocking connection attempt to the given node id.
def _maybe_connect(self, node_id):
"""Idempotent non-blocking connection attempt to the given node id."""
with self._lock:
conn = self._conns.get(node_id)
if conn is None:
broker = self.cluster.bro... |
Check whether a node is connected and ok to send more requests.
Arguments:
node_id (int): the id of the node to check
metadata_priority (bool): Mark node as not-ready if a metadata
refresh is required. Default: True
Returns:
bool: True if we are read... |
Return True iff the node_id is connected.
def connected(self, node_id):
"""Return True iff the node_id is connected."""
conn = self._conns.get(node_id)
if conn is None:
return False
return conn.connected() |
Close one or all broker connections.
Arguments:
node_id (int, optional): the id of the node to close
def close(self, node_id=None):
"""Close one or all broker connections.
Arguments:
node_id (int, optional): the id of the node to close
"""
with self._lo... |
Check whether the node connection has been disconnected or failed.
A disconnected node has either been closed or has failed. Connection
failures are usually transient and can be resumed in the next ready()
call, but there are cases where transient failures need to be caught
and re-acted... |
Return the number of milliseconds to wait, based on the connection
state, before attempting to send data. When disconnected, this respects
the reconnect backoff time. When connecting, returns 0 to allow
non-blocking connect to finish. When connected, returns a very large
number to handle... |
Check whether a node is ready to send more requests.
In addition to connection-level checks, this method also is used to
block additional requests from being sent during a metadata refresh.
Arguments:
node_id (int): id of the node to check
metadata_priority (bool): Mark... |
Send a request to a specific node. Bytes are placed on an
internal per-connection send-queue. Actual network I/O will be
triggered in a subsequent call to .poll()
Arguments:
node_id (int): destination node
request (Struct): request object (not-encoded)
wakeup... |
Try to read and write to sockets.
This method will also attempt to complete node connections, refresh
stale metadata, and run previously-scheduled tasks.
Arguments:
timeout_ms (int, optional): maximum amount of time to wait (in ms)
for at least one response. Must be... |
Get the number of in-flight requests for a node or all nodes.
Arguments:
node_id (int, optional): a specific node to check. If unspecified,
return the total for all nodes
Returns:
int: pending in-flight requests for the node, or all nodes if None
def in_flight_... |
Choose the node with fewest outstanding requests, with fallbacks.
This method will prefer a node with an existing connection and no
in-flight-requests. If no such node is found, a node will be chosen
randomly from disconnected nodes that are not "blacked out" (i.e.,
are not subject to a... |
Set specific topics to track for metadata.
Arguments:
topics (list of str): topics to check for metadata
Returns:
Future: resolves after metadata request/response
def set_topics(self, topics):
"""Set specific topics to track for metadata.
Arguments:
... |
Add a topic to the list of topics tracked via metadata.
Arguments:
topic (str): topic to track
Returns:
Future: resolves after metadata request/response
def add_topic(self, topic):
"""Add a topic to the list of topics tracked via metadata.
Arguments:
... |
Send a metadata request if needed.
Returns:
int: milliseconds until next refresh
def _maybe_refresh_metadata(self, wakeup=False):
"""Send a metadata request if needed.
Returns:
int: milliseconds until next refresh
"""
ttl = self.cluster.ttl()
wa... |
Attempt to guess the version of a Kafka broker.
Note: It is possible that this method blocks longer than the
specified timeout. This can happen if the entire cluster
is down and the client enters a bootstrap backoff sleep.
This is only possible if node_id is None.
R... |
Here we check the total amount of time elapsed since the oldest
non-obsolete window. This gives the total window_size of the batch,
which is the time used for the Rate computation. However, there is
an issue if we do not have sufficient data: for example, if only
1 second has elapsed in a 30 second... |
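The correction that paragraph alludes to can be sketched as padding the elapsed time with the windows not yet observed, so a rate measured over, say, 1 second of a 30-second window is not overstated (an illustrative reconstruction, not the library's code):

def effective_rate_window_ms(now_ms, oldest_window_start_ms, num_samples, window_size_ms):
    elapsed = now_ms - oldest_window_start_ms
    num_full_windows = int(elapsed / window_size_ms)
    min_full_windows = num_samples - 1
    if num_full_windows < min_full_windows:
        # Pad with unobserved windows so the denominator is never too small.
        elapsed += (min_full_windows - num_full_windows) * window_size_ms
    return elapsed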
Helper method to send produce requests.
Note that the msg type *must* be encoded to bytes by the user. Passing a unicode
message will not work; for example, you should encode before calling
send_messages via something like `unicode_message.encode('utf-8')`
All messages will set the message 'key' t... |
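The same bytes-only rule applies to the newer KafkaProducer unless a value_serializer is configured; a hedged sketch:

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('my-topic', u'caf\xe9'.encode('utf-8'))   # values must already be bytes

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         value_serializer=lambda v: v.encode('utf-8'))
producer.send('my-topic', u'caf\xe9')                   # or let the producer encode
producer.flush()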
Stop the producer (async mode). Blocks until async thread completes.
def stop(self, timeout=None):
"""
Stop the producer (async mode). Blocks until async thread completes.
"""
if timeout is not None:
log.warning('timeout argument to stop() is deprecated - '
... |
Append message to batch.
def append(self, offset, timestamp, key, value, headers=None):
""" Append message to batch.
"""
assert not headers, "Headers not supported in v0/v1"
# Check types
if type(offset) != int:
raise TypeError(offset)
if self._magic == 0:
... |
Encode msg data into the `msg_buffer`, which should be allocated
to at least the size of this message.
def _encode_msg(self, start_pos, offset, timestamp, key, value,
attributes=0):
""" Encode msg data into the `msg_buffer`, which should be allocated
to at least the ... |
Actual size of message to add
def size_in_bytes(self, offset, timestamp, key, value, headers=None):
""" Actual size of message to add
"""
assert not headers, "Headers not supported in v0/v1"
magic = self._magic
return self.LOG_OVERHEAD + self.record_size(magic, key, value) |
Upper bound estimate of record size.
def estimate_size_in_bytes(cls, magic, compression_type, key, value):
""" Upper bound estimate of record size.
"""
assert magic in [0, 1], "Not supported magic"
# In case of compression we may need another overhead for inner msg
if compressio... |
Allocate a buffer of the given size. This method blocks if there is not
enough memory and the buffer pool is configured with blocking mode.
Arguments:
size (int): The buffer size to allocate in bytes [ignored]
max_time_to_block_ms (int): The maximum time in milliseconds to
... |
Return buffers to the pool. If they are of the poolable size add them
to the free list, otherwise just mark the memory as free.
Arguments:
buffer_ (io.BytesIO): The buffer to return
def deallocate(self, buf):
"""
Return buffers to the pool. If they are of the poolable size ... |
Compressed messages should pass in bytes_to_read (via message size)
otherwise, we decode from data as Int32
def decode(cls, data, bytes_to_read=None):
"""Compressed messages should pass in bytes_to_read (via message size)
otherwise, we decode from data as Int32
"""
if isinstance... |
Encode an integer to a varint representation. See
https://developers.google.com/protocol-buffers/docs/encoding?csw=1#varints
on how those can be produced.
Arguments:
value (int): Value to encode
write (function): Called per byte that needs to be written
Returns:
... |
Number of bytes needed to encode an integer in variable-length format.
def size_of_varint(value):
""" Number of bytes needed to encode an integer in variable-length format.
"""
value = (value << 1) ^ (value >> 63)
if value <= 0x7f:
return 1
if value <= 0x3fff:
return 2
if value ... |
Decode an integer from a varint representation. See
https://developers.google.com/protocol-buffers/docs/encoding?csw=1#varints
on how those can be produced.
Arguments:
buffer (bytearray): buffer to read from.
pos (int): optional position to read from
Returns:
(... |
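The three varint helpers above share the zigzag scheme visible in size_of_varint (value = (value << 1) ^ (value >> 63)). A self-contained illustration of the same encoding, with helper names of my own choosing:

def zigzag_encode_varint(value, write):
    # Zigzag-encode a signed int, then emit it as a varint one byte at a time.
    value = (value << 1) ^ (value >> 63)        # zigzag: small negatives stay small
    while value > 0x7f:
        write((value & 0x7f) | 0x80)            # 7 payload bits + continuation bit
        value >>= 7
    write(value)

def zigzag_decode_varint(buf, pos=0):
    # Decode a zigzag varint from buf starting at pos; returns (value, new_pos).
    result = 0
    shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7f) << shift
        if not (b & 0x80):
            break
        shift += 7
    return (result >> 1) ^ -(result & 1), pos   # undo zigzag

out = bytearray()
zigzag_encode_varint(-300, out.append)
assert zigzag_decode_varint(out) == (-300, len(out))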
Encode and queue a kafka api request for sending.
Arguments:
request (object): An un-encoded kafka request.
correlation_id (int, optional): Optionally specify an ID to
correlate requests with responses. If not provided, an ID will
be generated automatical... |
Retrieve all pending bytes to send on the network
def send_bytes(self):
"""Retrieve all pending bytes to send on the network"""
data = b''.join(self.bytes_to_send)
self.bytes_to_send = []
return data |
Process bytes received from the network.
Arguments:
data (bytes): any length bytes received from a network connection
to a kafka broker.
Returns:
responses (list of (correlation_id, response)): any/all completed
responses, decoded from bytes to p... |
Get or create a connection to a broker using host and port
def _get_conn(self, host, port, afi):
"""Get or create a connection to a broker using host and port"""
host_key = (host, port)
if host_key not in self._conns:
self._conns[host_key] = BrokerConnection(
host, p... |
Returns the leader for a partition or None if the partition exists
but has no leader.
Raises:
UnknownTopicOrPartitionError: If the topic or partition is not part
of the metadata.
LeaderNotAvailableError: If the server has metadata, but there is no
current... |
Returns the coordinator broker for a consumer group.
GroupCoordinatorNotAvailableError will be raised if the coordinator
does not currently exist for the group.
GroupLoadInProgressError is raised if the coordinator is available
but is still loading offsets from the internal topic
def ... |
Attempt to send a broker-agnostic request to one of the available
brokers. Keep trying until you succeed.
def _send_broker_unaware_request(self, payloads, encoder_fn, decoder_fn):
"""
Attempt to send a broker-agnostic request to one of the available
brokers. Keep trying until you succee... |
Group a list of request payloads by topic+partition and send them to
the leader broker for that partition using the supplied encode/decode
functions
Arguments:
payloads: list of object-like entities with a topic (str) and
partition (int) attribute; payloads with duplicate t... |
Send a list of requests to the consumer coordinator for the group
specified using the supplied encode/decode functions. As the payloads
that use consumer-aware requests do not contain the group (e.g.
OffsetFetchRequest), all payloads must be for a single group.
Arguments:
group... |
Create an inactive copy of the client object, suitable for passing
to a separate thread.
Note that the copied connections are not initialized, so :meth:`.reinit`
must be called on the returned copy.
def copy(self):
"""
Create an inactive copy of the client object, suitable for ... |
Fetch broker and topic-partition metadata from the server.
Updates internal data: broker list, topic/partition list, and
topic/partition -> broker map. This method should be called after
receiving any error.
Note: Exceptions *will not* be raised in a full refresh (i.e. no topic
... |
Encode and send some ProduceRequests
ProduceRequests will be grouped by (topic, partition) and then
sent to a specific broker. Output is a list of responses in the
same order as the list of payloads specified
Arguments:
payloads (list of ProduceRequest): produce requests to... |
Encode and send a FetchRequest
Payloads are grouped by topic and partition so they can be pipelined
to the same brokers.
def send_fetch_request(self, payloads=(), fail_on_error=True,
callback=None, max_wait_time=100, min_bytes=4096):
"""
Encode and send a Fet... |
Performs additional validation steps which were not performed when this
token was decoded. This method is part of the "public" API to indicate
the intention that it may be overridden in subclasses.
def verify(self):
"""
Performs additional validation steps which were not performed when... |
Ensures that the token type claim is present and has the correct value.
def verify_token_type(self):
"""
Ensures that the token type claim is present and has the correct value.
"""
try:
token_type = self.payload[api_settings.TOKEN_TYPE_CLAIM]
except KeyError:
... |
Updates the expiration time of a token.
def set_exp(self, claim='exp', from_time=None, lifetime=None):
"""
Updates the expiration time of a token.
"""
if from_time is None:
from_time = self.current_time
if lifetime is None:
lifetime = self.lifetime
... |
Checks whether a timestamp value in the given claim has passed (since
the given datetime value in `current_time`). Raises a TokenError with
a user-facing error message if so.
def check_exp(self, claim='exp', current_time=None):
"""
Checks whether a timestamp value in the given claim ha... |
Returns an authorization token for the given user that will be provided
after authenticating the user's credentials.
def for_user(cls, user):
"""
Returns an authorization token for the given user that will be provided
after authenticating the user's credentials.
"""
user... |
Returns an access token created from this refresh token. Copies all
claims present in this refresh token to the new access token except
those claims listed in the `no_copy_claims` attribute.
def access_token(self):
"""
Returns an access token created from this refresh token. Copies al... |
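Putting for_user and access_token together, the usual pattern for issuing a token pair after authenticating a user looks like this:

from rest_framework_simplejwt.tokens import RefreshToken

def tokens_for_user(user):
    refresh = RefreshToken.for_user(user)
    return {
        'refresh': str(refresh),               # long-lived refresh token
        'access': str(refresh.access_token),   # short-lived access token derived from it
    }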
Returns an encoded token for the given payload dictionary.
def encode(self, payload):
"""
Returns an encoded token for the given payload dictionary.
"""
token = jwt.encode(payload, self.signing_key, algorithm=self.algorithm)
return token.decode('utf-8') |
Performs a validation of the given token and returns its payload
dictionary.
Raises a `TokenBackendError` if the token is malformed, if its
signature check fails, or if its 'exp' claim indicates it has expired.
def decode(self, token, verify=True):
"""
Performs a validation of ... |
Extracts the header containing the JSON web token from the given
request.
def get_header(self, request):
"""
Extracts the header containing the JSON web token from the given
request.
"""
header = request.META.get('HTTP_AUTHORIZATION')
if isinstance(header, str):... |
Extracts an unvalidated JSON web token from the given "Authorization"
header value.
def get_raw_token(self, header):
"""
Extracts an unvalidated JSON web token from the given "Authorization"
header value.
"""
parts = header.split()
if len(parts) == 0:
... |
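A standalone sketch of that header handling (illustrative, not the library code), assuming the default 'Bearer' auth header type:

def get_raw_token(header, auth_header_types=('Bearer',)):
    parts = header.split()
    if len(parts) == 0:
        return None                 # empty header, nothing to authenticate
    if parts[0] not in auth_header_types:
        return None                 # some other auth scheme; not ours to handle
    if len(parts) != 2:
        raise ValueError('Authorization header must contain two space-delimited values')
    return parts[1]

assert get_raw_token('Bearer abc.def.ghi') == 'abc.def.ghi'
assert get_raw_token('Basic dXNlcjpwYXNz') is None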
Validates an encoded JSON web token and returns a validated token
wrapper object.
def get_validated_token(self, raw_token):
"""
Validates an encoded JSON web token and returns a validated token
wrapper object.
"""
messages = []
for AuthToken in api_settings.AUTH_... |
Attempts to find and return a user using the given validated token.
def get_user(self, validated_token):
"""
Attempts to find and return a user using the given validated token.
"""
try:
user_id = validated_token[api_settings.USER_ID_CLAIM]
except KeyError:
... |
Returns a stateless user object which is backed by the given validated
token.
def get_user(self, validated_token):
"""
Returns a stateless user object which is backed by the given validated
token.
"""
if api_settings.USER_ID_CLAIM not in validated_token:
# Th... |
Native width of this image, calculated from its width in pixels and
horizontal dots per inch (dpi).
def default_cx(self):
"""
Native width of this image, calculated from its width in pixels and
horizontal dots per inch (dpi).
"""
px_width = self.image.px_width
ho... |
Native height of this image, calculated from its height in pixels and
vertical dots per inch (dpi).
def default_cy(self):
"""
Native height of this image, calculated from its height in pixels and
vertical dots per inch (dpi).
"""
px_height = self.image.px_height
... |
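Both native-size properties reduce to pixels divided by dpi, converted to EMU (914,400 EMU per inch). A standalone sketch of that arithmetic, with hypothetical helper names:

EMU_PER_INCH = 914400

def native_size_emu(px_width, px_height, horz_dpi=72, vert_dpi=72):
    cx = int(round(px_width / horz_dpi * EMU_PER_INCH))    # native width in EMU
    cy = int(round(px_height / vert_dpi * EMU_PER_INCH))   # native height in EMU
    return cx, cy

# A 300 x 150 px image at 96 dpi is 3.125" x 1.5625" natively.
assert native_size_emu(300, 150, 96, 96) == (2857500, 1428750)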
Filename from which this image part was originally created. A generic
name, e.g. 'image.png', is substituted if no name is available, for
example when the image was loaded from an unnamed stream. In that
case a default extension is applied based on the detected MIME type
of the image.
d... |
Return an |ImagePart| instance newly created from *image* and
assigned *partname*.
def from_image(cls, image, partname):
"""
Return an |ImagePart| instance newly created from *image* and
assigned *partname*.
"""
return ImagePart(partname, image.content_type, image.blob, ... |
Called by ``docx.opc.package.PartFactory`` to load an image part from
a package being opened by ``Document(...)`` call.
def load(cls, partname, content_type, blob, package):
"""
Called by ``docx.opc.package.PartFactory`` to load an image part from
a package being opened by ``Document(..... |
Add a new tab stop at *position*, a |Length| object specifying the
location of the tab stop relative to the paragraph edge. A negative
*position* value is valid and appears in hanging indentation. Tab
alignment defaults to left, but may be specified by passing a member
of the :ref:`WdTab... |
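A short python-docx usage example for the tab-stop API described above:

from docx import Document
from docx.enum.text import WD_TAB_ALIGNMENT
from docx.shared import Inches

document = Document()
paragraph = document.add_paragraph('Name\tValue')

tab_stops = paragraph.paragraph_format.tab_stops
tab_stops.add_tab_stop(Inches(1.5), WD_TAB_ALIGNMENT.LEFT)   # 1.5" from the paragraph edge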
Set val attribute of <w:rStyle> child element to *style*, adding a
new element if necessary. If *style* is |None|, remove the <w:rStyle>
element if present.
def style(self, style):
"""
Set val attribute of <w:rStyle> child element to *style*, adding a
new element if necessary. I... |
|True| if `w:vertAlign/@w:val` is 'subscript'. |False| if
`w:vertAlign/@w:val` contains any other value. |None| if
`w:vertAlign` is not present.
def subscript(self):
"""
|True| if `w:vertAlign/@w:val` is 'subscript'. |False| if
`w:vertAlign/@w:val` contains any other value. |Non... |
|True| if `w:vertAlign/@w:val` is 'superscript'. |False| if
`w:vertAlign/@w:val` contains any other value. |None| if
`w:vertAlign` is not present.
def superscript(self):
"""
|True| if `w:vertAlign/@w:val` is 'superscript'. |False| if
`w:vertAlign/@w:val` contains any other value... |
Return the value of the boolean child element having *name*, e.g.
'b', 'i', and 'smallCaps'.
def _get_bool_val(self, name):
"""
Return the value of the boolean child element having *name*, e.g.
'b', 'i', and 'smallCaps'.
"""
element = getattr(self, name)
if eleme... |
The underline type corresponding to the ``w:val`` attribute value.
def val(self):
"""
The underline type corresponding to the ``w:val`` attribute value.
"""
val = self.get(qn('w:val'))
underline = WD_UNDERLINE.from_xml(val)
if underline == WD_UNDERLINE.SINGLE:
... |