Infer schema from an RDD of Row or tuple. :param rdd: an RDD of Row or tuple :param samplingRatio: sampling ratio, or no sampling (default) :return: :class:`pyspark.sql.types.StructType` def _inferSchema(self, rdd, samplingRatio=None, names=None): """ Infer schema from an RDD o...
Create an RDD for DataFrame from an existing RDD, returns the RDD and schema. def _createFromRDD(self, rdd, schema, samplingRatio): """ Create an RDD for DataFrame from an existing RDD, returns the RDD and schema. """ if schema is None or isinstance(schema, (list, tuple)): s...
Create an RDD for DataFrame from a list or pandas.DataFrame, returns the RDD and schema. def _createFromLocal(self, data, schema): """ Create an RDD for DataFrame from a list or pandas.DataFrame, returns the RDD and schema. """ # make sure data could be consumed multiple ti...
Used when converting a pandas.DataFrame to Spark using to_records(), this will correct the dtypes of fields in a record so they can be properly loaded into Spark. :param rec: a numpy record to check field dtypes :return: corrected dtype for a numpy.record or None if no correction needed def _get...
Convert a pandas.DataFrame to a list of records that can be used to make a DataFrame :return: list of records def _convert_from_pandas(self, pdf, schema, timezone): """ Convert a pandas.DataFrame to a list of records that can be used to make a DataFrame :return: list of records """...
Create a DataFrame from a given pandas.DataFrame by slicing it into partitions, converting to Arrow data, then sending to the JVM to parallelize. If a schema is passed in, the data types will be used to coerce the data in Pandas to Arrow conversion. def _create_from_pandas_with_arrow(self, pdf, schema,...
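The Arrow path above is exercised simply by calling createDataFrame on a pandas.DataFrame with Arrow enabled. A minimal sketch, assuming an active SparkSession named `spark`, pyarrow installed, and the Spark 2.x config key `spark.sql.execution.arrow.enabled`:

    import pandas as pd

    # Enable the Arrow conversion path (Spark 2.x key; renamed in later releases).
    spark.conf.set("spark.sql.execution.arrow.enabled", "true")
    pdf = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
    df = spark.createDataFrame(pdf)  # pdf is sliced, converted to Arrow, parallelized
    df.show()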
Initialize a SparkSession for a pyspark shell session. This is called from shell.py to make error handling simpler without needing to declare local variables in that script, which would expose those to users. def _create_shell_session(): """ Initialize a SparkSession for a pyspark shell...
Creates a :class:`DataFrame` from an :class:`RDD`, a list or a :class:`pandas.DataFrame`. When ``schema`` is a list of column names, the type of each column will be inferred from ``data``. When ``schema`` is ``None``, it will try to infer the schema (column names and types) from ``data...
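A minimal sketch of the three schema modes described above, assuming an active SparkSession named `spark`:

    from pyspark.sql.types import StructType, StructField, LongType, StringType

    data = [(1, "row1"), (2, "row2")]
    df1 = spark.createDataFrame(data)                   # schema=None: infer names and types
    df2 = spark.createDataFrame(data, ["id", "label"])  # list of names: infer only the types
    explicit = StructType([StructField("id", LongType()),
                           StructField("label", StringType())])
    df3 = spark.createDataFrame(data, explicit)         # explicit schema: no inference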
Returns a :class:`DataFrame` representing the result of the given query. :return: :class:`DataFrame` >>> df.createOrReplaceTempView("table1") >>> df2 = spark.sql("SELECT field1 AS f1, field2 as f2 from table1") >>> df2.collect() [Row(f1=1, f2=u'row1'), Row(f1=2, f2=u'row2'), Ro...
Returns the specified table as a :class:`DataFrame`. :return: :class:`DataFrame` >>> df.createOrReplaceTempView("table1") >>> df2 = spark.table("table1") >>> sorted(df.collect()) == sorted(df2.collect()) True def table(self, tableName): """Returns the specified table a...
Returns a :class:`StreamingQueryManager` that allows managing all the :class:`StreamingQuery` instances active on `this` context. .. note:: Evolving. :return: :class:`StreamingQueryManager` def streams(self): """Returns a :class:`StreamingQueryManager` that allows managing all ...
Stop the underlying :class:`SparkContext`. def stop(self): """Stop the underlying :class:`SparkContext`. """ self._sc.stop() # We should clean the default session up. See SPARK-23228. self._jvm.SparkSession.clearDefaultSession() self._jvm.SparkSession.clearActiveSession(...
Returns a :class:`SparkJobInfo` object, or None if the job info could not be found or was garbage collected. def getJobInfo(self, jobId): """ Returns a :class:`SparkJobInfo` object, or None if the job info could not be found or was garbage collected. """ job = self._jtra...
Returns a :class:`SparkStageInfo` object, or None if the stage info could not be found or was garbage collected. def getStageInfo(self, stageId): """ Returns a :class:`SparkStageInfo` object, or None if the stage info could not be found or was garbage collected. """ stag...
Restore a namedtuple object def _restore(name, fields, value): """ Restore a namedtuple object""" k = (name, fields) cls = __cls.get(k) if cls is None: cls = collections.namedtuple(name, fields) __cls[k] = cls return cls(*value)
Make a class generated by namedtuple picklable def _hack_namedtuple(cls): """ Make a class generated by namedtuple picklable """ name = cls.__name__ fields = cls._fields def __reduce__(self): return (_restore, (name, fields, tuple(self))) cls.__reduce__ = __reduce__ cls._is_namedtuple_ = T...
Hack namedtuple() to make it picklable def _hijack_namedtuple(): """ Hack namedtuple() to make it picklable """ # hijack only one time if hasattr(collections.namedtuple, "__hijack"): return global _old_namedtuple # or it will be put in the closure global _old_namedtuple_kwdefaults # or it will ...
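A self-contained sketch of the __reduce__ trick these helpers install: instead of pickling the dynamically generated class, pickle the (name, fields, values) triple and rebuild the class on load. Names here are illustrative, not the module's own:

    import collections
    import pickle

    def _restore(name, fields, value):
        # Recreate the namedtuple class from its name/fields, then the instance.
        return collections.namedtuple(name, fields)(*value)

    Point = collections.namedtuple("Point", ["x", "y"])

    def __reduce__(self):
        return (_restore, ("Point", ("x", "y"), tuple(self)))

    Point.__reduce__ = __reduce__  # what _hack_namedtuple does for every namedtuple

    p = pickle.loads(pickle.dumps(Point(1, 2)))
    assert p == (1, 2)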
Load a stream of un-ordered Arrow RecordBatches, where the last iteration yields a list of indices that can be used to put the RecordBatches in the correct order. def load_stream(self, stream): """ Load a stream of un-ordered Arrow RecordBatches, where the last iteration yields a list o...
Create an Arrow record batch from the given pandas.Series or list of Series, with optional type. :param series: A single pandas.Series, list of Series, or list of (series, arrow_type) :return: Arrow RecordBatch def _create_batch(self, series): """ Create an Arrow record batch f...
Make ArrowRecordBatches from Pandas Series and serialize. Input is a single series or a list of series accompanied by an optional pyarrow type to coerce the data to. def dump_stream(self, iterator, stream): """ Make ArrowRecordBatches from Pandas Series and serialize. Input is a single series o...
Deserialize ArrowRecordBatches to an Arrow table and return as a list of pandas.Series. def load_stream(self, stream): """ Deserialize ArrowRecordBatches to an Arrow table and return as a list of pandas.Series. """ batches = super(ArrowStreamPandasSerializer, self).load_stream(stream) ...
Override because Pandas UDFs require a START_ARROW_STREAM before the Arrow stream is sent. This should be sent after creating the first record batch so in case of an error, it can be sent back to the JVM before the Arrow stream starts. def dump_stream(self, iterator, stream): """ Overri...
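A minimal sketch of the RecordBatch construction that _create_batch performs, assuming pyarrow is installed; the column names mirror Spark's positional `_0`, `_1` convention:

    import pandas as pd
    import pyarrow as pa

    s0 = pd.Series([1, 2, 3])
    s1 = pd.Series(["a", "b", "c"])
    arrs = [pa.Array.from_pandas(s0, type=pa.int64()),  # optional type coercion
            pa.Array.from_pandas(s1)]
    batch = pa.RecordBatch.from_arrays(arrs, ["_0", "_1"])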
Waits for the termination of `this` query, either by :func:`query.stop()` or by an exception. If the query has terminated with an exception, then the exception will be thrown. If `timeout` is set, it returns whether the query has terminated or not within the `timeout` seconds. If the qu...
Returns an array of the most recent :class:`StreamingQueryProgress` updates for this query. The number of progress updates retained for each stream is configured by Spark session configuration `spark.sql.streaming.numRecentProgressUpdates`. def recentProgress(self): """Returns an array of the most r...
Returns the most recent :class:`StreamingQueryProgress` update of this streaming query, or None if there were no progress updates. :return: a map def lastProgress(self): """ Returns the most recent :class:`StreamingQueryProgress` update of this streaming query or None if there wer...
:return: the StreamingQueryException if the query was terminated by an exception, or None. def exception(self): """ :return: the StreamingQueryException if the query was terminated by an exception, or None. """ if self._jsq.exception().isDefined(): je = self._jsq.exception()...
Wait until any of the queries on the associated SQLContext has terminated since the creation of the context, or since :func:`resetTerminated()` was called. If any query was terminated with an exception, then the exception will be thrown. If `timeout` is set, it returns whether the query has term...
Loads a data stream from a data source and returns it as a :class:`DataFrame`. .. note:: Evolving. :param path: optional string for file-system backed data sources. :param format: optional string for format of the data source. Defaults to 'parquet'. :param schema: optional :class:`pyspar...
Loads a JSON file stream and returns the results as a :class:`DataFrame`. `JSON Lines <http://jsonlines.org/>`_ (newline-delimited JSON) is supported by default. For JSON (one record per file), set the ``multiLine`` parameter to ``true``. If the ``schema`` parameter is not specified, this func...
Loads an ORC file stream, returning the result as a :class:`DataFrame`. .. note:: Evolving. >>> orc_sdf = spark.readStream.schema(sdf_schema).orc(tempfile.mkdtemp()) >>> orc_sdf.isStreaming True >>> orc_sdf.schema == sdf_schema True def orc(self, path): """Loads...
Loads a Parquet file stream, returning the result as a :class:`DataFrame`. You can set the following Parquet-specific option(s) for reading Parquet files: * ``mergeSchema``: sets whether we should merge schemas collected from all \ Parquet part-files. This will override ``spark.sql....
Loads a text file stream and returns a :class:`DataFrame` whose schema starts with a string column named "value", and followed by partitioned columns if there are any. The text files must be encoded as UTF-8. By default, each line in the text file is a new row in the resulting DataFrame...
r"""Loads a CSV file stream and returns the result as a :class:`DataFrame`. This function will go through the input once to determine the input schema if ``inferSchema`` is enabled. To avoid going through the entire data once, disable ``inferSchema`` option or specify the schema explicitly usin...
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink. Options include: * `append`: Only the new rows in the streaming DataFrame/Dataset will be written to the sink * `complete`: All the rows in the streaming DataFrame/Dataset will be written to the sink ...
Specifies the name of the :class:`StreamingQuery` that can be started with :func:`start`. This name must be unique among all the currently active queries in the associated SparkSession. .. note:: Evolving. :param queryName: unique name for the query >>> writer = sdf.writeStrea...
Set the trigger for the stream query. If this is not set it will run the query as fast as possible, which is equivalent to setting the trigger to ``processingTime='0 seconds'``. .. note:: Evolving. :param processingTime: a processing time interval as a string, e.g. '5 seconds', '1 minute'. ...
Sets the output of the streaming query to be processed using the provided writer ``f``. This is often used to write the output of a streaming query to arbitrary storage systems. The processing logic can be specified in two ways. #. A **function** that takes a row as input. This is a...
Sets the output of the streaming query to be processed using the provided function. This is supported only in the micro-batch execution modes (that is, when the trigger is not continuous). In every micro-batch, the provided function will be called with (i) the output row...
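A minimal sketch combining trigger and foreachBatch, assuming a streaming DataFrame `sdf`; the output path is hypothetical:

    def write_batch(batch_df, batch_id):
        # Called once per micro-batch with the output rows and the batch id.
        batch_df.write.mode("append").parquet("/tmp/out/batch-%d" % batch_id)

    query = (sdf.writeStream
                .trigger(processingTime="5 seconds")
                .foreachBatch(write_batch)
                .start())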
Streams the contents of the :class:`DataFrame` to a data source. The data source is specified by the ``format`` and a set of ``options``. If ``format`` is not specified, the default data source configured by ``spark.sql.sources.default`` will be used. .. note:: Evolving. :para...
Get the Python compiler to emit LOAD_FAST(arg); STORE_DEREF Notes ----- In Python 3, we could use an easier function: .. code-block:: python def f(): cell = None def _stub(value): nonlocal cell cell = value return _stub ...
Return whether *func* is a Tornado coroutine function. Running coroutines are not supported. def is_tornado_coroutine(func): """ Return whether *func* is a Tornado coroutine function. Running coroutines are not supported. """ if 'tornado.gen' not in sys.modules: return False gen = s...
Serialize obj as bytes streamed into file. The protocol defaults to cloudpickle.DEFAULT_PROTOCOL, which is an alias to pickle.HIGHEST_PROTOCOL. This setting favors maximum communication speed between processes running the same Python version. Set protocol=pickle.DEFAULT_PROTOCOL instead if you need to ensur...
Serialize obj as a string of bytes allocated in memory. The protocol defaults to cloudpickle.DEFAULT_PROTOCOL, which is an alias to pickle.HIGHEST_PROTOCOL. This setting favors maximum communication speed between processes running the same Python version. Set protocol=pickle.DEFAULT_PROTOCOL instead if you ...
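A minimal round-trip sketch: cloudpickle serializes objects the stdlib pickler rejects (for example a lambda), and the resulting bytes load with plain pickle:

    import pickle
    import cloudpickle

    payload = cloudpickle.dumps(lambda x: x + 1)  # stdlib pickle would raise here
    fn = pickle.loads(payload)                    # no cloudpickle needed to load
    assert fn(41) == 42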
Fills in the rest of the function data into the skeleton function object. The skeleton itself is created by _make_skel_func(). def _fill_function(*args): """Fills in the rest of the function data into the skeleton function object. The skeleton itself is created by _make_skel_func(). """ if len(args) == 2: ...
Put attributes from `class_dict` back on `skeleton_class`. See CloudPickler.save_dynamic_class for more info. def _rehydrate_skeleton_class(skeleton_class, class_dict): """Put attributes from `class_dict` back on `skeleton_class`. See CloudPickler.save_dynamic_class for more info. """ registry = ...
Return True if the module is a special module that cannot be imported by its name. def _is_dynamic(module): """ Return True if the module is a special module that cannot be imported by its name. """ # Quick check: modules that have a __file__ attribute are not dynamic modules. if hasattr(module, '...
Save a code object def save_codeobject(self, obj): """ Save a code object """ if PY3: # pragma: no branch args = ( obj.co_argcount, obj.co_kwonlyargcount, obj.co_nlocals, obj.co_stacksize, obj.co_flags, obj.co_code, obj.co_consts, obj.co_name...
Registered with the dispatch to handle all function types. Determines what kind of function obj is (e.g. lambda, defined at interactive prompt, etc) and handles the pickling appropriately. def save_function(self, obj, name=None): """ Registered with the dispatch to handle all function types. ...
Save a class that can't be stored as module global. This method is used to serialize classes that are defined inside functions, or that otherwise can't be serialized as attribute lookups from global modules. def save_dynamic_class(self, obj): """ Save a class that can't be stor...
Pickles an actual func object. A func comprises: code, globals, defaults, closure, and dict. We extract and save these, injecting reducing functions at certain points to recreate the func object. Keep in mind that some of these pieces can contain a ref to the func itself. Thus, a nai...
Save a "global". The name of this method is somewhat misleading: all types get dispatched here. def save_global(self, obj, name=None, pack=struct.pack): """ Save a "global". The name of this method is somewhat misleading: all types get dispatched here. """ ...
Inner logic to save instance. Based off pickle.save_inst def save_inst(self, obj): """Inner logic to save instance. Based off pickle.save_inst""" cls = obj.__class__ # Try the dispatch table (pickle module doesn't do it) f = self.dispatch.get(cls) if f: f(self, obj)...
itemgetter serializer (needed for namedtuple support) def save_itemgetter(self, obj): """itemgetter serializer (needed for namedtuple support)""" class Dummy: def __getitem__(self, item): return item items = obj(Dummy()) if not isinstance(items, tuple): ...
attrgetter serializer def save_attrgetter(self, obj): """attrgetter serializer""" class Dummy(object): def __init__(self, attrs, index=None): self.attrs = attrs self.index = index def __getattribute__(self, item): attrs = object.__...
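A self-contained sketch of the Dummy probe both serializers rely on: calling the getter on an object that echoes its keys reveals exactly which items or attributes the getter reads, which is enough to reconstruct it on the other side:

    import operator

    class Dummy:
        def __getitem__(self, item):
            return item

    g = operator.itemgetter(1, "name")
    print(g(Dummy()))  # (1, 'name') -- the arguments needed to rebuild the getter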
Copy the current param to a new parent; the new parent must be a dummy param. def _copy_new_parent(self, parent): """Copy the current param to a new parent; the new parent must be a dummy param.""" if self.parent == "undefined": param = copy.copy(self) param.parent = parent.uid return param ...
Convert a value to a list, if possible. def toList(value): """ Convert a value to a list, if possible. """ if type(value) == list: return value elif type(value) in [np.ndarray, tuple, xrange, array.array]: return list(value) elif isinstance(value,...
Convert a value to a list of floats, if possible. def toListFloat(value): """ Convert a value to a list of floats, if possible. """ if TypeConverters._can_convert_to_list(value): value = TypeConverters.toList(value) if all(map(lambda v: TypeConverters._is_numeric(v),...
Convert a value to a list of ints, if possible. def toListInt(value): """ Convert a value to a list of ints, if possible. """ if TypeConverters._can_convert_to_list(value): value = TypeConverters.toList(value) if all(map(lambda v: TypeConverters._is_integer(v), value...
Convert a value to a list of strings, if possible. def toListString(value): """ Convert a value to a list of strings, if possible. """ if TypeConverters._can_convert_to_list(value): value = TypeConverters.toList(value) if all(map(lambda v: TypeConverters._can_convert...
Convert a value to a MLlib Vector, if possible. def toVector(value): """ Convert a value to a MLlib Vector, if possible. """ if isinstance(value, Vector): return value elif TypeConverters._can_convert_to_list(value): value = TypeConverters.toList(value) ...
Convert a value to a string, if possible. def toString(value): """ Convert a value to a string, if possible. """ if isinstance(value, basestring): return value elif type(value) in [np.string_, np.str_]: return str(value) elif type(value) == np.uni...
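A minimal usage sketch of the converters above, assuming pyspark.ml is importable:

    import numpy as np
    from pyspark.ml.param import TypeConverters

    TypeConverters.toListFloat([1, 2, 3])         # [1.0, 2.0, 3.0]
    TypeConverters.toListString(np.array(["a"]))  # ['a']
    TypeConverters.toString(np.str_("label"))     # 'label'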
Copy all params defined on the class to the current object. def _copy_params(self): """ Copy all params defined on the class to the current object. """ cls = type(self) src_name_attrs = [(x, getattr(cls, x)) for x in dir(cls)] src_params = list(filter(lambda nameAttr: isinstance...
Returns all params ordered by name. The default implementation uses :py:func:`dir` to get all attributes of type :py:class:`Param`. def params(self): """ Returns all params ordered by name. The default implementation uses :py:func:`dir` to get all attributes of type :py:...
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string. def explainParam(self, param): """ Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string. """ pa...
Gets a param by its name. def getParam(self, paramName): """ Gets a param by its name. """ param = getattr(self, paramName) if isinstance(param, Param): return param else: raise ValueError("Cannot find param with name %s." % paramName)
Checks whether a param is explicitly set by the user. def isSet(self, param): """ Checks whether a param is explicitly set by the user. """ param = self._resolveParam(param) return param in self._paramMap
Checks whether a param has a default value. def hasDefault(self, param): """ Checks whether a param has a default value. """ param = self._resolveParam(param) return param in self._defaultParamMap
Tests whether this instance contains a param with a given (string) name. def hasParam(self, paramName): """ Tests whether this instance contains a param with a given (string) name. """ if isinstance(paramName, basestring): p = getattr(self, paramName, None) ...
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set. def getOrDefault(self, param): """ Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set. """ para...
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if conflicts exist, i.e., with ordering: default param values < user-supplied values < extra. :param...
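The precedence rule reads like a dict merge where later updates win; a sketch with illustrative values (the real map is keyed by Param instances, not strings):

    defaults = {"maxIter": 100}  # default param values (lowest precedence)
    user_set = {"maxIter": 10}   # user-supplied values
    extra = {"maxIter": 5}       # extra values passed to extractParamMap()

    merged = {}
    merged.update(defaults)
    merged.update(user_set)
    merged.update(extra)         # highest precedence wins on conflict
    assert merged["maxIter"] == 5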
Creates a copy of this instance with the same uid and some extra params. The default implementation creates a shallow copy using :py:func:`copy.copy`, and then copies the embedded and extra parameters over and returns the copy. Subclasses should override this method if the default approa...
Sets a parameter in the embedded param map. def set(self, param, value): """ Sets a parameter in the embedded param map. """ self._shouldOwn(param) try: value = param.typeConverter(value) except ValueError as e: raise ValueError('Invalid param val...
Validates that the input param belongs to this Params instance. def _shouldOwn(self, param): """ Validates that the input param belongs to this Params instance. """ if not (self.uid == param.parent and self.hasParam(param.name)): raise ValueError("Param %r does not belong to...
Resolves a param and validates the ownership. :param param: param name or the param instance, which must belong to this Params instance :return: resolved param instance def _resolveParam(self, param): """ Resolves a param and validates the ownership. :par...
Sets user-supplied params. def _set(self, **kwargs): """ Sets user-supplied params. """ for param, value in kwargs.items(): p = getattr(self, param) if value is not None: try: value = p.typeConverter(value) exce...
Sets default params. def _setDefault(self, **kwargs): """ Sets default params. """ for param, value in kwargs.items(): p = getattr(self, param) if value is not None and not isinstance(value, JavaObject): try: value = p.typeConv...
Copies param values from this instance to another instance for params shared by them. :param to: the target instance :param extra: extra params to be copied :return: the target instance with param values copied def _copyValues(self, to, extra=None): """ Copies param val...
Changes the uid of this instance. This updates both the stored uid and the parent uid of params and param maps. This is used by persistence (loading). :param newUid: new uid to use, which is converted to unicode :return: same instance, but with the uid and Param.parent values ...
Return a JavaRDD of Object by unpickling. It will convert each Python object into a Java object via Pyrolite, whether the RDD is serialized in batch or not. def _to_java_object_rdd(rdd): """ Return a JavaRDD of Object by unpickling. It will convert each Python object into a Java object via Pyrolite, whene...
Return the broadcasted value def value(self): """ Return the broadcasted value """ if not hasattr(self, "_value") and self._path is not None: # we only need to decrypt it here when encryption is enabled and # if it's on the driver, since executor decryption is handled alr...
Delete cached copies of this broadcast on the executors. If the broadcast is used after this is called, it will need to be re-sent to each executor. :param blocking: Whether to block until unpersisting has completed def unpersist(self, blocking=False): """ Delete cached copies ...
Destroy all data and metadata related to this broadcast variable. Use this with caution; once a broadcast variable has been destroyed, it cannot be used again. .. versionchanged:: 3.0.0 Added optional argument `blocking` to specify whether to block until all blocks are del...
Wrap this udf with a function and attach docstring from func def _wrapped(self): """ Wrap this udf with a function and attach docstring from func """ # It is possible for a callable instance without __name__ attribute and/or # __module__ attribute to be wrapped here. For exampl...
Register a Python function (including lambda function) or a user-defined function as a SQL function. :param name: name of the user-defined function in SQL statements. :param f: a Python function, or a user-defined function. The user-defined function can be either row-at-a-time or ve...
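A minimal sketch of registering a row-at-a-time Python function, assuming an active SparkSession `spark`; the function name is illustrative:

    from pyspark.sql.types import IntegerType

    spark.udf.register("plus_one", lambda x: x + 1, IntegerType())
    spark.sql("SELECT plus_one(41)").show()  # returns 42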
Register a Java user-defined function as a SQL function. In addition to a name and the function itself, the return type can be optionally specified. When the return type is not specified we would infer it via reflection. :param name: name of the user-defined function :param javaClassNa...
Register a Java user-defined aggregate function as a SQL function. :param name: name of the user-defined aggregate function :param javaClassName: fully qualified name of java class >>> spark.udf.registerJavaUDAF("javaUDAF", "test.org.apache.spark.sql.MyDoubleAvg") >>> df = spark.create...
Either recreate a StreamingContext from checkpoint data or create a new StreamingContext. If checkpoint data exists in the provided `checkpointPath`, then StreamingContext will be recreated from the checkpoint data. If the data does not exist, then the provided setupFunc will be used to create a...
Return either the currently active StreamingContext (i.e., if there is a context started but not stopped) or None. def getActive(cls): """ Return either the currently active StreamingContext (i.e., if there is a context started but not stopped) or None. """ activePythonC...
Either return the active StreamingContext (i.e. currently started but not stopped), or recreate a StreamingContext from checkpoint data or create a new StreamingContext using the provided setupFunc function. If the checkpointPath is None or does not contain valid checkpoint data, then setupFunc ...
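A minimal recovery sketch, assuming an existing SparkContext `sc`; the checkpoint path and batch interval are hypothetical:

    from pyspark.streaming import StreamingContext

    def setup():
        ssc = StreamingContext(sc, batchDuration=1)
        ssc.checkpoint("/tmp/checkpoint")
        # ... define the DStream graph here before returning ...
        return ssc

    # Recreates from /tmp/checkpoint if valid checkpoint data exists,
    # otherwise calls setup() to build a fresh context.
    ssc = StreamingContext.getOrCreate("/tmp/checkpoint", setup)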
Wait for the execution to stop. @param timeout: time to wait in seconds def awaitTermination(self, timeout=None): """ Wait for the execution to stop. @param timeout: time to wait in seconds """ if timeout is None: self._jssc.awaitTermination() else:...
Stop the execution of the streams, with option of ensuring all received data has been processed. @param stopSparkContext: Stop the associated SparkContext or not @param stopGracefully: Stop gracefully by waiting for the processing of all received data to be complet...
Create an input from TCP source hostname:port. Data is received using a TCP socket and the received bytes are interpreted as UTF8-encoded, ``\\n``-delimited lines. @param hostname: Hostname to connect to for receiving data @param port: Port to connect to for receiving data ...
Create an input stream that monitors a Hadoop-compatible file system for new files and reads them as text files. Files must be written to the monitored directory by "moving" them from another location within the same file system. File names starting with . are ignored. The text files mus...
Create an input stream that monitors a Hadoop-compatible file system for new files and reads them as flat binary files with records of fixed length. Files must be written to the monitored directory by "moving" them from another location within the same file system. File names starting wi...
Create an input stream from a queue of RDDs or list. In each batch, it will process either one or all of the RDDs returned by the queue. .. note:: Changes to the queue after the stream is created will not be recognized. @param rdds: Queue of RDDs @param oneAtATime: pick one rdd e...
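A minimal sketch of queueStream, assuming an existing StreamingContext `ssc`; with the default oneAtATime=True each micro-batch consumes one RDD from the queue:

    rdds = [ssc.sparkContext.parallelize(range(i * 10, (i + 1) * 10)) for i in range(3)]
    stream = ssc.queueStream(rdds)
    stream.pprint()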
Create a new DStream in which each RDD is generated by applying a function on RDDs of the DStreams. The order of the JavaRDDs in the transform function parameter will be the same as the order of corresponding DStreams in the list. def transform(self, dstreams, transformFunc): """ ...
Create a unified DStream from multiple DStreams of the same type and same slide duration. def union(self, *dstreams): """ Create a unified DStream from multiple DStreams of the same type and same slide duration. """ if not dstreams: raise ValueError("should h...
Add an org.apache.spark.streaming.scheduler.StreamingListener object for receiving system events related to streaming. def addStreamingListener(self, streamingListener): """ Add an org.apache.spark.streaming.scheduler.StreamingListener object for receiving system events related to...
Load TF checkpoints in a PyTorch model def load_tf_weights_in_gpt2(model, gpt2_checkpoint_path): """ Load TF checkpoints in a PyTorch model """ try: import re import numpy as np import tensorflow as tf except ImportError: print("Loading a TensorFlow model in PyTorch, re...
Constructs a `GPT2Config` from a json file of parameters. def from_json_file(cls, json_file): """Constructs a `GPT2Config` from a json file of parameters.""" with open(json_file, "r", encoding="utf-8") as reader: text = reader.read() return cls.from_dict(json.loads(text))
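A minimal round-trip sketch, assuming GPT2Config is importable from this module; the file name and field values are illustrative:

    import json

    cfg = {"vocab_size": 50257, "n_positions": 1024, "n_embd": 768,
           "n_layer": 12, "n_head": 12}
    with open("gpt2_config.json", "w") as f:
        json.dump(cfg, f)

    config = GPT2Config.from_json_file("gpt2_config.json")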