Register a compound statistic with this sensor which yields multiple measurable quantities (like a histogram) Arguments: stat (AbstractCompoundStat): The stat to register config (MetricConfig): The configuration for this stat. If None then the stat will use the d...
Register a metric with this sensor Arguments: metric_name (MetricName): The name of the metric stat (AbstractMeasurableStat): The statistic to keep config (MetricConfig): A special configuration for this metric. If None use the sensor default configuration. ...
Returns list of preferred (protocols, metadata) def group_protocols(self): """Returns list of preferred (protocols, metadata)""" if self._subscription.subscription is None: raise Errors.IllegalStateError('Consumer has not subscribed to topics') # dpkp note: I really dislike this. ...
Poll for coordinator events. Only applicable if group_id is set, and broker version supports GroupCoordinators. This ensures that the coordinator is known, and if using automatic partition assignment, ensures that the consumer has joined the group. This also handles periodic offset commi...
Return seconds (float) remaining until :meth:`.poll` should be called again def time_to_next_poll(self): """Return seconds (float) remaining until :meth:`.poll` should be called again""" if not self.config['enable_auto_commit']: return self.time_to_next_heartbeat() if time.time() >...
Check whether the group should be rejoined Returns: bool: True if consumer should rejoin group, False otherwise def need_rejoin(self): """Check whether the group should be rejoined Returns: bool: True if consumer should rejoin group, False otherwise """ ...
Fetch committed offsets for assigned partitions. def refresh_committed_offsets_if_needed(self): """Fetch committed offsets for assigned partitions.""" if self._subscription.needs_fetch_committed_offsets: offsets = self.fetch_committed_offsets(self._subscription.assigned_partitions()) ...
Fetch the current committed offsets for specified partitions Arguments: partitions (list of TopicPartition): partitions to fetch Returns: dict: {TopicPartition: OffsetAndMetadata} def fetch_committed_offsets(self, partitions): """Fetch the current committed offsets for...
Close the coordinator, leave the current group, and reset local generation / member_id. Keyword Arguments: autocommit (bool): If auto-commit is configured for this consumer, this optional flag causes the consumer to attempt to commit any pending consumed offs...
Commit specific offsets asynchronously. Arguments: offsets (dict {TopicPartition: OffsetAndMetadata}): what to commit callback (callable, optional): called as callback(offsets, response) response will be either an Exception or a OffsetCommitResponse struc...
Commit specific offsets synchronously. This method will retry until the commit completes successfully or an unrecoverable error is encountered. Arguments: offsets (dict {TopicPartition: OffsetAndMetadata}): what to commit Raises error on failure def commit_offsets_sync(se...
Commit offsets for the specified list of topics and partitions. This is a non-blocking call which returns a request future that can be polled in the case of a synchronous commit or ignored in the asynchronous case. Arguments: offsets (dict of {TopicPartition: OffsetAndMetad...
Fetch the committed offsets for a set of partitions. This is a non-blocking call. The returned future can be polled to get the actual offsets returned from the broker. Arguments: partitions (list of TopicPartition): the partitions to fetch Returns: Future: reso...
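These coordinator methods back the public KafkaConsumer commit API. A minimal usage sketch, assuming a hypothetical broker at localhost:9092 and a topic named 'orders':

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    'orders',                               # hypothetical topic
    bootstrap_servers='localhost:9092',
    group_id='example-group',               # commits require a group_id
    enable_auto_commit=False,
)
consumer.poll(timeout_ms=1000)

# Synchronous commit of all consumed offsets; retries until it succeeds
# or hits an unrecoverable error.
consumer.commit()

# Asynchronous commit; the callback receives (offsets, response_or_exception).
consumer.commit_async(callback=lambda offsets, response: print(offsets, response))

# Fetch the committed offset for one partition back from the coordinator.
print(consumer.committed(TopicPartition('orders', 0)))

An explicit {TopicPartition: OffsetAndMetadata} dict can also be passed to commit()/commit_async() to commit specific offsets, as the docstrings above describe.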
Subscribe to a list of topics, or a topic regex pattern. Partitions will be dynamically assigned via a group coordinator. Topic subscriptions are not incremental: this list will replace the current assignment (if there is one). This method is incompatible with assign_from_user() ...
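A minimal sketch of the public subscribe() call this wraps, assuming a hypothetical broker and topic names:

from kafka import KafkaConsumer

consumer = KafkaConsumer(bootstrap_servers='localhost:9092', group_id='example-group')

# Replaces any previous subscription; partitions are assigned dynamically
# by the group coordinator across all members of 'example-group'.
consumer.subscribe(topics=['orders', 'payments'])

# Alternatively, subscribe to every topic matching a regex pattern:
# consumer.subscribe(pattern='^metrics-.*')

for message in consumer:
    print(message.topic, message.partition, message.offset)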
Ensures that the topic name is valid according to the kafka source. def _ensure_valid_topic_name(self, topic): """ Ensures that the topic name is valid according to the kafka source. """ # See Kafka Source: # https://github.com/apache/kafka/blob/39eb31feaeebfb184d98cc5d94da9148c2319d81/clients...
Change the topic subscription. Arguments: topics (list of str): topics for subscription Raises: IllegalStateError: if assign_from_user has been used already TypeError: if a topic is None or a non-str ValueError: if a topic is an empty string or ...
Add topics to the current group subscription. This is used by the group leader to ensure that it receives metadata updates for all topics that any member of the group is subscribed to. Arguments: topics (list of str): topics to add to the group subscription def group_subscribe(sel...
Reset the group's subscription to only contain topics subscribed by this consumer. def reset_group_subscription(self): """Reset the group's subscription to only contain topics subscribed by this consumer.""" if self._user_assignment: raise IllegalStateError(self._SUBSCRIPTION_EXCEPTION_MESS...
Manually assign a list of TopicPartitions to this consumer. This interface does not allow for incremental assignment and will replace the previous assignment (if there was one). Manual topic assignment through this method does not use the consumer's group management functionality. As s...
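The public counterpart is KafkaConsumer.assign(); a sketch under the same hypothetical broker/topic assumptions:

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers='localhost:9092')  # no group_id needed

# Manual assignment: replaces any previous assignment and bypasses the
# group coordinator (no rebalancing, no automatic partition assignment).
consumer.assign([TopicPartition('orders', 0), TopicPartition('orders', 1)])

consumer.seek_to_beginning()          # optional: start from the earliest offsets
records = consumer.poll(timeout_ms=1000)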
Update the assignment to the specified partitions This method is called by the coordinator to dynamically assign partitions based on the consumer's topic subscription. This is different from assign_from_user() which directly sets the assignment from a user-supplied TopicPartition list. ...
Clear all topic subscriptions and partition assignments def unsubscribe(self): """Clear all topic subscriptions and partition assignments""" self.subscription = None self._user_assignment.clear() self.assignment.clear() self.subscribed_pattern = None
Return current set of paused TopicPartitions. def paused_partitions(self): """Return current set of paused TopicPartitions.""" return set(partition for partition in self.assignment if self.is_paused(partition))
Return set of TopicPartitions that should be fetched. def fetchable_partitions(self): """Return set of TopicPartitions that should be fetched.""" fetchable = set() for partition, state in six.iteritems(self.assignment): if state.is_fetchable(): fetchable.add(partitio...
Returns consumed offsets as {TopicPartition: OffsetAndMetadata} def all_consumed_offsets(self): """Returns consumed offsets as {TopicPartition: OffsetAndMetadata}""" all_consumed = {} for partition, state in six.iteritems(self.assignment): if state.has_valid_position: ...
Mark partition for offset reset using specified or default strategy. Arguments: partition (TopicPartition): partition to mark offset_reset_strategy (OffsetResetStrategy, optional) def need_offset_reset(self, partition, offset_reset_strategy=None): """Mark partition for offset r...
Build a cleanup closure that doesn't increase our ref count def _cleanup_factory(self): """Build a cleanup closure that doesn't increase our ref count""" _self = weakref.proxy(self) def wrapper(): try: _self.close(timeout=0) except (ReferenceError, Attrib...
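The same weakref-closure pattern in a standalone sketch (the class and names are illustrative, not the library's): the cleanup callback holds only a weak proxy, so registering it (e.g. with atexit) does not keep the object alive:

import weakref

class Resource(object):
    def close(self, timeout=None):
        print('closing')

    def _cleanup_factory(self):
        # Close over a weak proxy rather than self so the returned callback
        # adds no strong reference to this object.
        _self = weakref.proxy(self)
        def wrapper():
            try:
                _self.close(timeout=0)
            except (ReferenceError, AttributeError):
                pass  # object already collected; nothing to clean up
        return wrapper

r = Resource()
cleanup = r._cleanup_factory()
del r        # under CPython refcounting the object is collected here
cleanup()    # safe no-op: the dead proxy raises ReferenceError, which is swallowed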
Close this producer. Arguments: timeout (float, optional): timeout in seconds to wait for completion. def close(self, timeout=None): """Close this producer. Arguments: timeout (float, optional): timeout in seconds to wait for completion. """ # drop our...
Returns set of all known partitions for the topic. def partitions_for(self, topic): """Returns set of all known partitions for the topic.""" max_wait = self.config['max_block_ms'] / 1000.0 return self._wait_on_metadata(topic, max_wait)
Publish a message to a topic. Arguments: topic (str): topic where the message will be published value (optional): message value. Must be type bytes, or be serializable to bytes via configured value_serializer. If value is None, key is required and message...
Invoking this method makes all buffered records immediately available to send (even if linger_ms is greater than 0) and blocks on the completion of the requests associated with these records. The post-condition of :meth:`~kafka.KafkaProducer.flush` is that any previously sent record will...
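A usage sketch of send()/flush() on the public producer, assuming a hypothetical broker and topic:

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda v: v.encode('utf-8'),
    linger_ms=50,                     # batch for up to 50 ms before sending
)

# send() is asynchronous and returns a future for the broker's response.
future = producer.send('orders', value='order-1001', key=b'customer-7')

# flush() makes all buffered records sendable immediately and blocks until
# every outstanding request has completed (successfully or not).
producer.flush()

metadata = future.get(timeout=10)     # RecordMetadata(topic, partition, offset, ...)
print(metadata.topic, metadata.partition, metadata.offset)

print(producer.partitions_for('orders'))   # set of known partition ids
producer.close()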
Validate that the record size isn't too large. def _ensure_valid_record_size(self, size): """Validate that the record size isn't too large.""" if size > self.config['max_request_size']: raise Errors.MessageSizeTooLargeError( "The message is %d bytes when serialized which is ...
Wait for cluster metadata including partitions for the given topic to be available. Arguments: topic (str): topic we want metadata for max_wait (float): maximum time in secs for waiting on the metadata Returns: set: partition ids for the topic Raise...
Get metrics on producer performance. This is ported from the Java Producer, for details see: https://kafka.apache.org/documentation/#producer_monitoring Warning: This is an unstable interface. It may change in future releases without warning. def metrics(self, raw=Fals...
Update offsets using auto_offset_reset policy (smallest|largest) Arguments: partition (int): the partition for which offsets should be updated Returns: Updated offset on success, None on failure def reset_partition_offset(self, partition): """Update offsets using auto_offset_reset...
Alter the current offset in the consumer, similar to fseek Arguments: offset: how much to modify the offset whence: where to modify it from, default is None * None is an absolute offset * 0 is relative to the earliest available offset (head) ...
Fetch the specified number of messages Keyword Arguments: count: Indicates the maximum number of messages to be fetched block: If True, the API will block till all messages are fetched. If block is a positive integer the API will block until that many mes...
If no messages can be fetched, returns None. If get_partition_info is None, it defaults to self.partition_info If get_partition_info is True, returns (partition, message) If get_partition_info is False, returns message def _get_message(self, block=True, timeout=0.1, get_partition_info=None, ...
Encode an integer to a varint representation. See https://developers.google.com/protocol-buffers/docs/encoding?csw=1#varints on how those can be produced. Arguments: num (int): Value to encode Returns: bytearray: Encoded representation of integer with length from 1 to 10 ...
Number of bytes needed to encode an integer in variable-length format. def size_of_varint_1(value): """ Number of bytes needed to encode an integer in variable-length format. """ value = (value << 1) ^ (value >> 63) res = 0 while True: res += 1 value = value >> 7 if value ==...
Number of bytes needed to encode an integer in variable-length format. def size_of_varint_2(value): """ Number of bytes needed to encode an integer in variable-length format. """ value = (value << 1) ^ (value >> 63) if value <= 0x7f: return 1 if value <= 0x3fff: return 2 if valu...
Decode an integer from a varint representation. See https://developers.google.com/protocol-buffers/docs/encoding?csw=1#varints on how those can be produced. Arguments: buffer (bytes-like): any object acceptable by ``memoryview`` pos (int): optional position to read from R...
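A standalone sketch of the zigzag + 7-bit-group scheme these helpers implement (not the library's internal functions; assumes 64-bit signed inputs):

def zigzag_encode_varint(num):
    # Zigzag maps small-magnitude signed values to small unsigned codes.
    value = ((num << 1) ^ (num >> 63)) & 0xffffffffffffffff
    buf = bytearray()
    while True:
        byte = value & 0x7f
        value >>= 7
        if value:
            buf.append(byte | 0x80)   # continuation bit: more bytes follow
        else:
            buf.append(byte)
            return buf                # 1 to 10 bytes total

def zigzag_decode_varint(buf, pos=0):
    result, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7f) << shift
        if not byte & 0x80:
            break
        shift += 7
    return (result >> 1) ^ -(result & 1), pos   # undo zigzag

assert zigzag_decode_varint(zigzag_encode_varint(-300)) == (-300, 2)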
Timeout any windows that have expired in the absence of any events def purge_obsolete_samples(self, config, now): """ Timeout any windows that have expired in the absence of any events """ expire_age = config.samples * config.time_window_ms for sample in self._samples: ...
Close the KafkaAdminClient connection to the Kafka broker. def close(self): """Close the KafkaAdminClient connection to the Kafka broker.""" if not hasattr(self, '_closed') or self._closed: log.info("KafkaAdminClient already closed.") return self._metrics.close() ...
Find the latest version of the protocol operation supported by both this library and the broker. This resolves to the lesser of the latest api version this library supports and the max version supported by the broker. :param operation: A list of protocol operation versions from ...
Determine the Kafka cluster controller. def _refresh_controller_id(self): """Determine the Kafka cluster controller.""" version = self._matching_api_version(MetadataRequest) if 1 <= version <= 6: request = MetadataRequest[version]() response = self._send_request_to_node(...
Find the broker node_id of the coordinator of the given group. Sends a FindCoordinatorRequest message to the cluster. Will block until the FindCoordinatorResponse is received. Any errors are immediately raised. :param group_id: The consumer group ID. This is typically the group ...
Send a Kafka protocol message to a specific broker. Will block until the message result is received. :param node_id: The broker id to which to send the message. :param request: The message to send. :return: The Kafka protocol response for the message. :exception: The exception ...
Send a Kafka protocol message to the cluster controller. Will block until the message result is received. :param request: The message to send. :return: The Kafka protocol response for the message. def _send_request_to_controller(self, request): """Send a Kafka protocol message to the ...
Create new topics in the cluster. :param new_topics: A list of NewTopic objects. :param timeout_ms: Milliseconds to wait for new topics to be created before the broker returns. :param validate_only: If True, don't actually create new topics. Not supported by all versions...
Delete topics from the cluster. :param topics: A list of topic name strings. :param timeout_ms: Milliseconds to wait for topics to be deleted before the broker returns. :return: Appropriate version of DeleteTopicsResponse class. def delete_topics(self, topics, timeout_ms=None): ...
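A sketch of the admin client calls described above, with a hypothetical broker and topic name:

from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# Create a topic with 3 partitions and replication factor 1.
admin.create_topics(
    new_topics=[NewTopic(name='orders', num_partitions=3, replication_factor=1)],
    validate_only=False,
)

# ... and later remove it again.
admin.delete_topics(topics=['orders'], timeout_ms=5000)
admin.close()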
Fetch configuration parameters for one or more Kafka resources. :param config_resources: A list of ConfigResource objects. Any keys in ConfigResource.configs dict will be used to filter the result. Setting the configs dict to None will get all values. An empty dict will get...
Alter configuration parameters of one or more Kafka resources. Warning: This is currently broken for BROKER resources because those must be sent to that specific broker, whereas this method always picks the least-loaded node. See the comment in the source code for details. ...
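A sketch of reading and altering topic configs via the same client (the topic name and retention value are illustrative):

from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# Describe: an empty or None configs dict returns all values for the resource.
topic_resource = ConfigResource(ConfigResourceType.TOPIC, 'orders')
print(admin.describe_configs(config_resources=[topic_resource]))

# Alter: note the BROKER-resource routing caveat described above.
admin.alter_configs(config_resources=[
    ConfigResource(ConfigResourceType.TOPIC, 'orders',
                   configs={'retention.ms': '86400000'}),
])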
Create additional partitions for an existing topic. :param topic_partitions: A map of topic name strings to NewPartition objects. :param timeout_ms: Milliseconds to wait for new partitions to be created before the broker returns. :param validate_only: If True, don't actually create ...
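A short sketch, reusing the hypothetical 'orders' topic from the previous examples:

from kafka.admin import KafkaAdminClient, NewPartitions

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# Grow 'orders' to 6 partitions in total (partition counts can only increase).
admin.create_partitions({'orders': NewPartitions(total_count=6)})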
Describe a set of consumer groups. Any errors are immediately raised. :param group_ids: A list of consumer group IDs. These are typically the group names as strings. :param group_coordinator_id: The node_id of the groups' coordinator broker. If set to None, it will quer...
List all consumer groups known to the cluster. This returns a list of Consumer Group tuples. The tuples are composed of the consumer group name and the consumer group protocol type. Only consumer groups that store their offsets in Kafka are returned. The protocol type will be a...
Fetch Consumer Group Offsets. Note: This does not verify that the group_id or partitions actually exist in the cluster. As soon as any error is encountered, it is immediately raised. :param group_id: The consumer group id name for which to fetch offsets. :param group_c...
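A sketch of the group-inspection calls above, assuming a group named 'example-group' exists on the hypothetical cluster:

from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

# (group_name, protocol_type) tuples; protocol_type is 'consumer' for groups
# using the built-in partition assignors.
for group, protocol_type in admin.list_consumer_groups():
    print(group, protocol_type)

print(admin.describe_consumer_groups(['example-group']))

# {TopicPartition: OffsetAndMetadata} for every partition the group has committed.
print(admin.list_consumer_group_offsets('example-group'))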
Update CRC-32C checksum with data. Args: crc: 32-bit checksum to update as long. data: byte array, string or iterable over bytes. Returns: 32-bit updated CRC-32C as long. def crc_update(crc, data): """Update CRC-32C checksum with data. Args: crc: 32-bit checksum to updat...
Expire batches if metadata is not available. A batch whose metadata is not available should be expired if one of the following is true: * the batch is not in retry AND request timeout has elapsed after it is ready (full or linger.ms has been reached). * the batch is in retry...
Add a record to the accumulator, return the append result. The append result will contain the future metadata, and flag for whether the appended batch is full or a new batch is created Arguments: tp (TopicPartition): The topic/partition to which this record is being...
Abort the batches that have been sitting in RecordAccumulator for more than the configured request_timeout due to metadata being unavailable. Arguments: request_timeout_ms (int): milliseconds to timeout cluster (ClusterMetadata): current metadata for kafka cluster ...
Re-enqueue the given record batch in the accumulator to retry. def reenqueue(self, batch): """Re-enqueue the given record batch in the accumulator to retry.""" now = time.time() batch.attempts += 1 batch.last_attempt = now batch.last_append = now batch.set_retry() ...
Get a list of nodes whose partitions are ready to be sent, and the earliest time at which any non-sendable partition will be ready; also return a flag indicating whether any of the accumulated partition batches have an unknown leader. A destination node is ready to send if: * T...
Return whether there is any unsent record in the accumulator. def has_unsent(self): """Return whether there is any unsent record in the accumulator.""" for tp in list(self._batches.keys()): with self._tp_locks[tp]: dq = self._batches[tp] if len(dq): ...
Drain all the data for the given nodes and collate them into a list of batches that will fit within the specified size on a per-node basis. This method attempts to avoid choosing the same topic-node repeatedly. Arguments: cluster (ClusterMetadata): The current cluster metadata ...
Deallocate the record batch. def deallocate(self, batch): """Deallocate the record batch.""" self._incomplete.remove(batch) self._free.deallocate(batch.buffer())
Mark all partitions as ready to send and block until the send is complete def await_flush_completion(self, timeout=None): """ Mark all partitions as ready to send and block until the send is complete """ try: for batch in self._incomplete.all(): log.debug('Wa...
This function is only called when sender is closed forcefully. It will fail all the incomplete batches and return. def abort_incomplete_batches(self): """ This function is only called when sender is closed forcefully. It will fail all the incomplete batches and return. """ ...
Go through incomplete batches and abort them. def _abort_batches(self): """Go through incomplete batches and abort them.""" error = Errors.IllegalStateError("Producer is closed forcefully.") for batch in self._incomplete.all(): tp = batch.topic_partition # Close the batc...
Append a message to the buffer. Returns: RecordMetadata or None if unable to append def append(self, timestamp, key, value, headers=[]): """ Append a message to the buffer. Returns: RecordMetadata or None if unable to append """ if self._closed: return None ...
Write message to messageset buffer with MsgVersion 2 def append(self, offset, timestamp, key, value, headers, # Cache for LOAD_FAST opcodes encode_varint=encode_varint, size_of_varint=size_of_varint, get_type=type, type_int=int, time_time=time.time, byte_like...
Get the upper bound estimate on the size of record def estimate_size_in_bytes(cls, key, value, headers): """ Get the upper bound estimate on the size of record """ return ( cls.HEADER_STRUCT.size + cls.MAX_RECORD_OVERHEAD + cls.size_of(key, value, headers) )
Returns seconds (float) remaining before next heartbeat should be sent def time_to_next_heartbeat(self): """Returns seconds (float) remaining before next heartbeat should be sent""" time_since_last_heartbeat = time.time() - max(self.last_send, self.last_reset) if self.heartbeat_failed: ...
Attempt to determine the family of an address (or hostname) :return: either socket.AF_INET or socket.AF_INET6 or socket.AF_UNSPEC if the address family could not be determined def _address_family(address): """ Attempt to determine the family of an address (or hostname) :r...
Parse the IP and port from a string in the format of: * host_or_ip <- Can be either IPv4 address literal or hostname/fqdn * host_or_ipv4:port <- Can be either IPv4 address literal or hostname/fqdn * [host_or_ip] <- IPv6 address literal * [host_or_ip]:po...
Collects a comma-separated set of hosts (host:port) and optionally randomize the returned list. def collect_hosts(hosts, randomize=True): """ Collects a comma-separated set of hosts (host:port) and optionally randomize the returned list. """ if isinstance(hosts, six.string_types): host...
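A standalone sketch of the host:port formats the docstrings above enumerate (not the library's own helper); the default port is an assumption:

def parse_host_port(entry, default_port=9092):
    # '[ipv6]' or '[ipv6]:port' -- brackets are required for IPv6 literals.
    entry = entry.strip()
    if entry.startswith('['):
        host, _, rest = entry[1:].partition(']')
        return host, int(rest[1:]) if rest.startswith(':') else default_port
    # 'host:port' for hostnames and IPv4 literals.
    host, sep, port = entry.rpartition(':')
    if sep and port.isdigit():
        return host, int(port)
    # bare 'host' / IPv4 literal.
    return entry, default_port

hosts = 'kafka1:9092, kafka2, [::1]:9093'
print([parse_host_port(h) for h in hosts.split(',')])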
Returns a list of getaddrinfo structs, optionally filtered to an afi (ipv4 / ipv6) def dns_lookup(host, port, afi=socket.AF_UNSPEC): """Returns a list of getaddrinfo structs, optionally filtered to an afi (ipv4 / ipv6)""" # XXX: all DNS functions in Python are blocking. If we really # want to be non-blocki...
Attempt to connect and return ConnectionState def connect(self): """Attempt to connect and return ConnectionState""" if self.state is ConnectionStates.DISCONNECTED and not self.blacked_out(): self.last_attempt = time.time() next_lookup = self._next_afi_sockaddr() if ...
Return a string representation of the OPTIONAL key-value pairs that can be sent with an OAUTHBEARER initial request. def _token_extensions(self): """ Return a string representation of the OPTIONAL key-value pairs that can be sent with an OAUTHBEARER initial request. """ ...
Return true if we are disconnected from the given node and can't re-establish a connection yet def blacked_out(self): """ Return true if we are disconnected from the given node and can't re-establish a connection yet """ if self.state is ConnectionStates.DISCONNECTED: ...
Return the number of milliseconds to wait, based on the connection state, before attempting to send data. When disconnected, this respects the reconnect backoff time. When connecting, returns 0 to allow non-blocking connect to finish. When connected, returns a very large number to handle...
Returns True if still connecting (this may encompass several different states, such as SSL handshake, authorization, etc). def connecting(self): """Returns True if still connecting (this may encompass several different states, such as SSL handshake, authorization, etc).""" return self.s...
Close socket and fail all in-flight-requests. Arguments: error (Exception, optional): pending in-flight-requests will be failed with this exception. Default: kafka.errors.KafkaConnectionError. def close(self, error=None): """Close socket and fail all in-flig...
Queue request for async network send, return Future() def send(self, request, blocking=True): """Queue request for async network send, return Future()""" future = Future() if self.connecting(): return future.failure(Errors.NodeNotReadyError(str(self))) elif not self.connecte...
Can block on network if request is larger than send_buffer_bytes def send_pending_requests(self): """Can block on network if request is larger than send_buffer_bytes""" try: with self._lock: if not self._can_send_recv(): return Errors.NodeNotReadyError(st...
Non-blocking network receive. Return list of (response, future) tuples def recv(self): """Non-blocking network receive. Return list of (response, future) tuples """ responses = self._recv() if not responses and self.requests_timed_out(): log.warning('%s tim...
Take all available bytes from socket, return list of any responses from parser def _recv(self): """Take all available bytes from socket, return list of any responses from parser""" recvd = [] self._lock.acquire() if not self._can_send_recv(): log.warning('%s cannot recv: soc...
Attempt to guess the broker version. Note: This is a blocking call. Returns: version tuple, i.e. (0, 10), (0, 9), (0, 8, 2), ... def check_version(self, timeout=2, strict=False, topics=[]): """Attempt to guess the broker version. Note: This is a blocking call. Returns: versi...
The main run loop for the sender thread. def run(self): """The main run loop for the sender thread.""" log.debug("Starting Kafka producer I/O thread.") # main loop, runs until close is called while self._running: try: self.run_once() except Excep...
Run a single iteration of sending. def run_once(self): """Run a single iteration of sending.""" while self._topics_to_add: self._client.add_topic(self._topics_to_add.pop()) # get the list of partitions with data ready to send result = self._accumulator.ready(self._metadata)...
Start closing the sender (won't complete until all data is sent). def initiate_close(self): """Start closing the sender (won't complete until all data is sent).""" self._running = False self._accumulator.close() self.wakeup()
Handle a produce response. def _handle_produce_response(self, node_id, send_time, batches, response): """Handle a produce response.""" # if we have a response, parse it log.debug('Parsing produce response: %r', response) if response: batches_by_partition = dict([(batch.topic...
Complete or retry the given batch of records. Arguments: batch (RecordBatch): The record batch error (Exception): The error (or None if none) base_offset (int): The base offset assigned to the records if successful timestamp_ms (int, optional): The timestamp retu...
We can retry a send if the error is transient and the number of attempts taken is fewer than the maximum allowed def _can_retry(self, batch, error): """ We can retry a send if the error is transient and the number of attempts taken is fewer than the maximum allowed """ r...
Transfer the record batches into a list of produce requests on a per-node basis. Arguments: collated: {node_id: [RecordBatch]} Returns: dict: {node_id: ProduceRequest} (version depends on api_version) def _create_produce_requests(self, collated): """ Tr...
Create a produce request from the given record batches. Returns: ProduceRequest (version depends on api_version) def _produce_request(self, node_id, acks, timeout, batches): """Create a produce request from the given record batches. Returns: ProduceRequest (version dep...
Encodes the given data with snappy compression. If xerial_compatible is set, then the stream is encoded in a fashion compatible with the xerial snappy library. The block size (xerial_blocksize) controls how frequently blocking occurs; 32k is the default in the xerial library. The format winds up ...
Detects if the data given might have been encoded with the blocking mode of the xerial snappy library. This mode writes a magic header of the format: +--------+--------------+------------+---------+--------+ | Marker | Magic String | Null / Pad | Version | Compat | +...
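A sketch of detecting that header; the magic byte values used below are the commonly cited xerial constants and should be treated as an assumption here:

_XERIAL_MAGIC = b'\x82SNAPPY\x00'   # marker byte + magic string + null pad

def has_xerial_header(payload):
    # The full header is 16 bytes: the 8-byte magic above followed by two
    # big-endian int32s (version, compat), per the diagram above.
    return len(payload) >= 16 and bytes(payload[:8]) == _XERIAL_MAGIC

print(has_xerial_header(_XERIAL_MAGIC + b'\x00\x00\x00\x01' * 2))  # True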
Decode payload using interoperable LZ4 framing. Requires Kafka >= 0.10 def lz4f_decode(payload): """Decode payload using interoperable LZ4 framing. Requires Kafka >= 0.10""" # pylint: disable-msg=no-member ctx = lz4f.createDecompContext() data = lz4f.decompressFrame(payload, ctx) lz4f.freeDecompCon...
Encode payload for 0.8/0.9 brokers -- requires an incorrect header checksum. def lz4_encode_old_kafka(payload): """Encode payload for 0.8/0.9 brokers -- requires an incorrect header checksum.""" assert xxhash is not None data = lz4_encode(payload) header_size = 7 flg = data[4] if not isinstance...
Get all BrokerMetadata Returns: set: {BrokerMetadata, ...} def brokers(self): """Get all BrokerMetadata Returns: set: {BrokerMetadata, ...} """ return set(self._brokers.values()) or set(self._bootstrap_brokers.values())