This logs the time usage of a code block def time_logger(name): """This logs the time usage of a code block""" start_time = time.time() yield end_time = time.time() total_time = end_time - start_time logging.info("%s; time: %ss", name, total_time)
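The snippet above is a generator; the decorator is cut off, but it is presumably meant to be used with `contextlib.contextmanager` (an assumption here). A minimal runnable sketch:

```python
import contextlib
import logging
import time

@contextlib.contextmanager
def time_logger(name):
    """Logs the time usage of a code block (sketch of the helper above)."""
    start_time = time.time()
    yield
    total_time = time.time() - start_time
    logging.info("%s; time: %ss", name, total_time)

# The elapsed time of the `with` body is logged on exit.
with time_logger("small loop"):
    total = sum(range(1000))
```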
Initializes ray based on environment variables and internal defaults. def initialize_ray(): """Initializes ray based on environment variables and internal defaults.""" if threading.current_thread().name == "MainThread": plasma_directory = None object_store_memory = os.environ.get("MODIN_MEMORY"...
Applies func to the object. See notes in Parent class about this method. Args: func: The function to apply. num_splits: The number of times to split the result object. other_axis_partition: Another `DaskFrameAxisPartition` object to apply to func wit...
Convert categorical variable into indicator variables. Args: data (array-like, Series, or DataFrame): data to encode. prefix (string, [string]): Prefix to apply to each encoded column label. prefix_sep (string, [string]): Separator between prefix and value...
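Plain pandas exposes the same `get_dummies` signature described above (Modin mirrors it); a small example of the prefix behavior:

```python
import pandas as pd

# Each distinct value becomes an indicator column named prefix + sep + value.
s = pd.Series(["a", "b", "a"])
dummies = pd.get_dummies(s, prefix="col", prefix_sep="_")
```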
Applies func to the object in the plasma store. See notes in Parent class about this method. Args: func: The function to apply. num_splits: The number of times to split the result object. other_axis_partition: Another `PandasOnRayFrameAxisPartition` object to apply ...
Shuffle the order of the data in this axis based on the `lengths`. Extends `BaseFrameAxisPartition.shuffle`. Args: func: The function to apply before splitting. lengths: The list of partition lengths to split the result into. Returns: A list of RemotePartit...
Deploy a function along a full axis in Ray. Args: axis: The axis to perform the function along. func: The function to perform. num_splits: The number of splits to return (see `split_result_of_axis_func_pandas`) kwargs: A di...
Deploy a function along a full axis between two data sets in Ray. Args: axis: The axis to perform the function along. func: The function to perform. num_splits: The number of splits to return (see `split_result_of_axis_func_pandas`). len_of_left: ...
Query columns of the DataManager with a boolean expression. Args: expr: Boolean expression to query the columns with. Returns: DataManager containing the rows where the boolean expression is satisfied. def query(self, expr, **kwargs): """Query columns of the Dat...
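At the pandas level, the boolean-expression behavior described above looks like this (a plain-pandas sketch; Modin's `query` follows the same contract):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})
# Keep only the rows where the boolean expression is satisfied.
result = df.query("a > 1 and b < 30")
```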
Converts Modin DataFrame to Pandas DataFrame. Returns: Pandas DataFrame of the DataManager. def to_pandas(self): """Converts Modin DataFrame to Pandas DataFrame. Returns: Pandas DataFrame of the DataManager. """ df = self.data.to_pandas(is_tran...
Deploy a function along a full axis in Ray. Args: axis: The axis to perform the function along. func: The function to perform. num_splits: The number of splits to return (see `split_result_of_axis_func_pandas`) kwargs: A dictionary of keyword arguments. partition...
Deploy a function along a full axis between two data sets in Ray. Args: axis: The axis to perform the function along. func: The function to perform. num_splits: The number of splits to return (see `split_result_of_axis_func_pandas`). len_of_left: The number of values in ...
Applies func to the object in the plasma store. See notes in Parent class about this method. Args: func: The function to apply. num_splits: The number of times to split the result object. other_axis_partition: Another `PyarrowOnRayFrameAxisPartition` object to apply...
Shuffle the order of the data in this axis based on the `func`. Extends `BaseFrameAxisPartition.shuffle`. :param func: The function to apply before splitting. :param num_splits: The number of splits to return. :param kwargs: A dictionary of keyword arguments. :return: A list of remote partition objects. def shuffle(self, func, num_splits=None, **kwargs): """Shuffle the order of the data in this axis based on ...
Deploy a function to a partition in Ray. Args: func: The function to apply. partition: The partition to apply the function to. kwargs: A dictionary of keyword arguments for the function. Returns: The result of the function. def deploy_ray_func(func, partition, kwargs): """...
Gets the object out of the plasma store. Returns: The object from the plasma store. def get(self): """Gets the object out of the plasma store. Returns: The object from the plasma store. """ if len(self.call_queue): return self.apply(lambda x...
Apply a function to the object stored in this partition. Note: It does not matter if func is callable or an ObjectID. Ray will handle it correctly either way. The keyword arguments are sent as a dictionary. Args: func: The function to apply. Returns: ...
Convert the object stored in this partition to a Pandas DataFrame. Returns: A Pandas DataFrame. def to_pandas(self): """Convert the object stored in this partition to a Pandas DataFrame. Returns: A Pandas DataFrame. """ dataframe = self.get().to_pandas(...
Put an object in the Plasma store and wrap it in this object. Args: obj: The object to be put. Returns: A `RayRemotePartition` object. def put(cls, obj): """Put an object in the Plasma store and wrap it in this object. Args: obj: The object to be p...
Detect missing values for an array-like object. Args: obj: Object to check for null or missing values. Returns: bool or array-like of bool def isna(obj): """ Detect missing values for an array-like object. Args: obj: Object to check for null or missing values. Returns:...
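The scalar-or-array return described above can be seen directly with plain pandas:

```python
import numpy as np
import pandas as pd

# A scalar input yields a bool; an array-like yields an array of bools.
scalar_result = pd.isna(np.nan)
array_result = pd.isna(pd.Series([1.0, None]))
```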
Database style join, where common columns in "on" are merged. Args: left: DataFrame. right: DataFrame. how: What type of join to use. on: The common column name(s) to join on. If None, and left_on and right_on are also None, will default to all commonly named ...
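A small plain-pandas example of the database-style join described above:

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 2], "a": ["x", "y"]})
right = pd.DataFrame({"key": [2, 3], "b": ["p", "q"]})
# With how="inner", only keys present in both frames survive.
joined = pd.merge(left, right, how="inner", on="key")
```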
Check if it is possible to distribute a query given these args Args: partition_column: column used to share the data between the workers lower_bound: the minimum value to be requested from the partition_column upper_bound: the maximum value to be requested from the partition_column Returns: ...
Check whether the given sql arg is a query or a table Args: engine: SQLAlchemy connection engine sql: SQL query or table name Returns: True for a table, False otherwise def is_table(engine, sql): """ Check whether the given sql arg is a query or a table Args: engine: SQLAlchemy connec...
Extract all useful info from the given table Args: engine: SQLAlchemy connection engine table: table name Returns: Dictionary of info def get_table_metadata(engine, table): """ Extract all useful info from the given table Args: engine: SQLAlchemy connection engine ...
Extract column names and python types from metadata Args: metadata: Table metadata Returns: dict with column names and python types def get_table_columns(metadata): """ Extract column names and python types from metadata Args: metadata: Table metadata Returns: ...
Check query sanity Args: query: query string Returns: None def check_query(query): """ Check query sanity Args: query: query string Returns: None """ q = query.lower() if "select " not in q: raise InvalidQuery("SELECT word not found in the que...
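The error message above is cut off mid-word; a minimal sketch of the idea (the real helper may apply more checks than shown here):

```python
class InvalidQuery(Exception):
    """Raised when a SQL string fails a basic sanity check."""

def check_query(query):
    """Check query sanity: require a SELECT and a FROM clause (sketch)."""
    q = query.lower()
    if "select " not in q:
        raise InvalidQuery("SELECT word not found in the query")
    if "from " not in q:
        raise InvalidQuery("FROM word not found in the query")
```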
Extract column names and python types from query Args: engine: SQLAlchemy connection engine query: SQL query Returns: dict with column names and python types def get_query_columns(engine, query): """ Extract column names and python types from query Args: engine: SQ...
Check partition_column existence and type Args: partition_column: partition_column name cols: dict with columns names and python types Returns: None def check_partition_column(partition_column, cols): """ Check partition_column existence and type Args: partition_colum...
Return a column name list and the query string Args: sql: SQL query or table name con: database connection or url string partition_column: column used to share the data between the workers Returns: Column name list and query string def get_query_info(sql, con, partition_colu...
Put bounds in the query Args: query: SQL query string partition_column: partition_column name start: lower_bound end: upper_bound Returns: Query with bounds def query_put_bounders(query, partition_column, start, end): """ Put bounds in the query Args: ...
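The SQL shape is truncated above; one plausible sketch wraps the original query as a subquery and bounds the partition column (the real helper's SQL text may differ):

```python
def query_put_bounders(query, partition_column, start, end):
    """Return `query` wrapped with lower/upper bounds on the partition column."""
    return "SELECT * FROM ({0}) AS q WHERE {1} >= {2} AND {1} <= {3}".format(
        query, partition_column, start, end
    )

bounded = query_put_bounders("SELECT * FROM users", "id", 0, 99)
```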
Computes the index after a number of rows have been removed. Note: In order for this to be used properly, the indexes must not be changed before you compute this. Args: axis: The axis to extract the index from. data_object: The new data object to extract the index f...
Prepares methods given various metadata. Args: pandas_func: The function to prepare. Returns: Helper function which handles potential transpose. def _prepare_method(self, pandas_func, **kwargs): """Prepares methods given various metadata. Args: pandas...
Returns the numeric columns of the Manager. Returns: List of index names. def numeric_columns(self, include_bool=True): """Returns the numeric columns of the Manager. Returns: List of index names. """ columns = [] for col, dtype in zip(self.colu...
Preprocesses numeric functions to clean dataframe and pick numeric indices. Args: axis: '0' if columns and '1' if rows. Returns: Tuple with return value(if any), indices to apply func to & cleaned Manager. def numeric_function_clean_dataframe(self, axis): """Preprocess...
Joins a pair of index objects (columns or rows) by a given strategy. Args: axis: The axis index object to join (0 for columns, 1 for index). other_index: The other_index to join on. how: The type of join to make (e.g. right, left). Returns: Joine...
Joins a list or two objects together. Args: other: The other object(s) to join on. Returns: Joined objects. def join(self, other, **kwargs): """Joins a list or two objects together. Args: other: The other object(s) to join on. Returns: ...
Concatenates two objects together. Args: axis: The axis index object to join (0 for columns, 1 for index). other: The other_index to concat with. Returns: Concatenated objects. def concat(self, axis, other, **kwargs): """Concatenates two objects together. ...
Copartition two QueryCompiler objects. Args: axis: The axis to copartition along. other: The other Query Compiler(s) to copartition against. how_to_join: How to manage joining the index object ("left", "right", etc.) sort: Whether or not to sort the joined index....
Converts Modin DataFrame to Pandas DataFrame. Returns: Pandas DataFrame of the DataManager. def to_pandas(self): """Converts Modin DataFrame to Pandas DataFrame. Returns: Pandas DataFrame of the DataManager. """ df = self.data.to_pandas(is_transposed=se...
Converts a simple Pandas DataFrame to a Modin DataFrame. Args: cls: DataManager object to convert the DataFrame to. df: Pandas DataFrame object. block_partitions_cls: BlockPartitions object to store partitions Returns: Returns DataManag...
Inter-data operations (e.g. add, sub). Args: other: The other Manager for the operation. how_to_join: The type of join to make (e.g. right, outer). Returns: New DataManager with new data and index. def _inter_manager_operations(self, other, how_to_join, fun...
Helper method for inter-manager and scalar operations. Args: func: The function to use on the Manager/scalar. other: The other Manager/scalar. Returns: New DataManager with new data and index. def _inter_df_op_handler(self, func, other, **kwargs): """Helper...
Perform an operation between two objects. Note: The list of operations is as follows: - add - eq - floordiv - ge - gt - le - lt - mod - mul - ne - pow - rfloordiv ...
Uses other manager to update corresponding values in this manager. Args: other: The other manager. Returns: New DataManager with updated data and index. def update(self, other, **kwargs): """Uses other manager to update corresponding values in this manager. Ar...
Gets values from this manager where cond is true else from other. Args: cond: Condition on which to evaluate values. Returns: New DataManager with updated data and index. def where(self, cond, other, **kwargs): """Gets values from this manager where cond is true else f...
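The keep-where-true, fill-from-other behavior described above, shown with plain pandas:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, -2, 3]})
# Values where the condition holds are kept; the rest come from `other` (0 here).
result = df.where(df > 0, 0)
```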
Handler for mapping scalar operations across a Manager. Args: axis: The axis index object to execute the function on. scalar: The scalar value to map. func: The function to use on the Manager with the scalar. Returns: A new QueryCompiler with updated dat...
Fits a new index for this Manager. Args: axis: The axis index object to target the reindex on. labels: New labels to conform 'axis' on to. Returns: A new QueryCompiler with updated data and new index. def reindex(self, axis, labels, **kwargs): """Fits a new ...
Removes all levels from index and sets a default level_0 index. Returns: A new QueryCompiler with updated data and reset index. def reset_index(self, **kwargs): """Removes all levels from index and sets a default level_0 index. Returns: A new QueryCompiler with updated...
Transposes this DataManager. Returns: Transposed new DataManager. def transpose(self, *args, **kwargs): """Transposes this DataManager. Returns: Transposed new DataManager. """ new_data = self.data.transpose(*args, **kwargs) # Switch the index a...
Apply function that will reduce the data to a Pandas Series. Args: axis: 0 for columns and 1 for rows. Default is 0. map_func: Callable function to map the dataframe. reduce_func: Callable function to reduce the dataframe. If none, then apply map_func twice. ...
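The contract described above (reuse the map function as the reducer when none is given) can be sketched outside the partitioning machinery:

```python
import pandas as pd

def full_axis_reduce(df, map_func, reduce_func=None):
    """Reduce `df` toward a Series/scalar; reuse map_func when reduce_func is None."""
    if reduce_func is None:
        reduce_func = map_func
    return reduce_func(map_func(df))

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
total = full_axis_reduce(df, lambda d: d.sum())  # sum of the column sums
```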
Counts the number of non-NaN objects for each column or row. Return: A new QueryCompiler object containing counts of non-NaN objects from each column or row. def count(self, **kwargs): """Counts the number of non-NaN objects for each column or row. Return: ...
Returns the mean for each numerical column or row. Return: A new QueryCompiler object containing the mean from each numerical column or row. def mean(self, **kwargs): """Returns the mean for each numerical column or row. Return: A new QueryCompiler object c...
Returns the minimum from each column or row. Return: A new QueryCompiler object with the minimum value from each column or row. def min(self, **kwargs): """Returns the minimum from each column or row. Return: A new QueryCompiler object with the minimum value from each ...
Calculates the sum or product of the DataFrame. Args: func: Pandas func to apply to DataFrame. ignore_axis: Whether to ignore axis when raising TypeError Return: A new QueryCompiler object with sum or prod of the object. def _process_sum_prod(self, func, **kwargs): ...
Returns the product of each numerical column or row. Return: A new QueryCompiler object with the product of each numerical column or row. def prod(self, **kwargs): """Returns the product of each numerical column or row. Return: A new QueryCompiler object with the produ...
Calculates if any or all the values are true. Return: A new QueryCompiler object containing boolean values or boolean. def _process_all_any(self, func, **kwargs): """Calculates if any or all the values are true. Return: A new QueryCompiler object containing boolean val...
Returns whether all the elements are true, potentially over an axis. Return: A new QueryCompiler object containing boolean values or boolean. def all(self, **kwargs): """Returns whether all the elements are true, potentially over an axis. Return: A new QueryCompiler ob...
Converts columns dtypes to given dtypes. Args: col_dtypes: Dictionary of {col: dtype,...} where col is the column name and dtype is a numpy dtype. Returns: DataFrame with updated dtypes. def astype(self, col_dtypes, **kwargs): """Converts columns dtypes...
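The `{col: dtype, ...}` mapping described above works the same way in plain pandas:

```python
import pandas as pd

df = pd.DataFrame({"a": ["1", "2"], "b": [1.5, 2.5]})
# col_dtypes maps column names to target dtypes; unlisted columns keep theirs.
converted = df.astype({"a": "int64"})
```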
Applies a map that reduces the Manager to a series but requires knowledge of the full axis. Args: func: Function to reduce the Manager by. This function takes in a Manager. axis: axis to apply the function to. alternate_index: If the resulting series should have an index d...
Returns index of first non-NaN/NULL value. Return: Scalar of index name. def first_valid_index(self): """Returns index of first non-NaN/NULL value. Return: Scalar of index name. """ # It may be possible to incrementally check each partition, but this ...
Returns the first occurrence of the maximum over requested axis. Returns: A new QueryCompiler object containing the maximum of each column or axis. def idxmax(self, **kwargs): """Returns the first occurrence of the maximum over requested axis. Returns: A new QueryCompi...
Returns the first occurrence of the minimum over requested axis. Returns: A new QueryCompiler object containing the minimum of each column or axis. def idxmin(self, **kwargs): """Returns the first occurrence of the minimum over requested axis. Returns: A new QueryCompi...
Returns index of last non-NaN/NULL value. Return: Scalar of index name. def last_valid_index(self): """Returns index of last non-NaN/NULL value. Return: Scalar of index name. """ def last_valid_index_builder(df): df.index = pandas.RangeInde...
Returns median of each column or row. Returns: A new QueryCompiler object containing the median of each column or row. def median(self, **kwargs): """Returns median of each column or row. Returns: A new QueryCompiler object containing the median of each column or row. ...
Returns the memory usage of each column. Returns: A new QueryCompiler object containing the memory usage of each column. def memory_usage(self, **kwargs): """Returns the memory usage of each column. Returns: A new QueryCompiler object containing the memory usage of eac...
Returns quantile of each column or row. Returns: A new QueryCompiler object containing the quantile of each column or row. def quantile_for_single_value(self, **kwargs): """Returns quantile of each column or row. Returns: A new QueryCompiler object containing the quant...
Reduce Manager along select indices using a function that needs the full axis. Args: func: Callable that reduces the dimension of the object and requires full knowledge of the entire axis. axis: 0 for columns and 1 for rows. Defaults to 0. index: Index of the result...
Generates descriptive statistics. Returns: DataFrame object containing the descriptive statistics of the DataFrame. def describe(self, **kwargs): """Generates descriptive statistics. Returns: DataFrame object containing the descriptive statistics of the DataFrame. ...
Returns a new QueryCompiler with null values dropped along given axis. Return: a new DataManager def dropna(self, **kwargs): """Returns a new QueryCompiler with null values dropped along given axis. Return: a new DataManager """ axis = kwargs.get("axis", ...
Returns a new QueryCompiler with expr evaluated on columns. Args: expr: The string expression to evaluate. Returns: A new QueryCompiler with new columns after applying expr. def eval(self, expr, **kwargs): """Returns a new QueryCompiler with expr evaluated on columns. ...
Returns a new QueryCompiler with modes calculated for each label along given axis. Returns: A new QueryCompiler with modes calculated. def mode(self, **kwargs): """Returns a new QueryCompiler with modes calculated for each label along given axis. Returns: A new QueryCo...
Replaces NaN values with the method provided. Returns: A new QueryCompiler with null values filled. def fillna(self, **kwargs): """Replaces NaN values with the method provided. Returns: A new QueryCompiler with null values filled. """ axis = kwargs.get(...
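Filling nulls with a constant value, as a plain-pandas example of the behavior above:

```python
import pandas as pd

df = pd.DataFrame({"a": [1.0, None, 3.0]})
# Replace every null value with 0.
filled = df.fillna(0)
```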
Query columns of the DataManager with a boolean expression. Args: expr: Boolean expression to query the columns with. Returns: DataManager containing the rows where the boolean expression is satisfied. def query(self, expr, **kwargs): """Query columns of the DataManage...
Computes numerical rank along axis. Equal values are set to the average. Returns: DataManager containing the ranks of the values along an axis. def rank(self, **kwargs): """Computes numerical rank along axis. Equal values are set to the average. Returns: DataManager co...
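The ties-to-average behavior described above, shown with plain pandas:

```python
import pandas as pd

s = pd.Series([3, 1, 2, 2])
# Tied values receive the average of the ranks they span (the default method).
ranks = s.rank()
```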
Sorts the data with respect to either the columns or the indices. Returns: DataManager containing the data sorted by columns or indices. def sort_index(self, **kwargs): """Sorts the data with respect to either the columns or the indices. Returns: DataManager containing...
Maps function to select indices along full axis. Args: axis: 0 for columns and 1 for rows. func: Callable mapping function over the BlockPartitions. indices: indices along axis to map over. keep_remaining: True to keep indices where function was not applied. ...
Returns Manager containing quantiles along an axis for numeric columns. Returns: DataManager containing quantiles of original DataManager along an axis. def quantile_for_list_of_values(self, **kwargs): """Returns Manager containing quantiles along an axis for numeric columns. Retu...
Returns the last n rows. Args: n: Integer containing the number of rows to return. Returns: DataManager containing the last n rows of the original DataManager. def tail(self, n): """Returns the last n rows. Args: n: Integer containing the number of...
Returns the first n columns. Args: n: Integer containing the number of columns to return. Returns: DataManager containing the first n columns of the original DataManager. def front(self, n): """Returns the first n columns. Args: n: Integer containi...
Get column data for target labels. Args: key: Target labels by which to retrieve data. Returns: A new QueryCompiler. def getitem_column_array(self, key): """Get column data for target labels. Args: key: Target labels by which to retrieve data. ...
Get row data for target indices. Args: key: Target numeric indices by which to retrieve data. Returns: A new QueryCompiler. def getitem_row_array(self, key): """Get row data for target indices. Args: key: Target numeric indices by which to retrieve d...
Set the column defined by `key` to the `value` provided. Args: key: The column name to set. value: The value to set the column to. Returns: A new QueryCompiler def setitem(self, axis, key, value): """Set the column defined by `key` to the `value` provided....
Remove row data for target index and columns. Args: index: Target index to drop. columns: Target columns to drop. Returns: A new QueryCompiler. def drop(self, index=None, columns=None): """Remove row data for target index and columns. Args: ...
Insert new column data. Args: loc: Insertion index. column: Column labels to insert. value: Dtype object values to insert. Returns: A new PandasQueryCompiler with new data inserted. def insert(self, loc, column, value): """Insert new column data...
Apply func across given axis. Args: func: The function to apply. axis: Target axis to apply the function along. Returns: A new PandasQueryCompiler. def apply(self, func, axis, *args, **kwargs): """Apply func across given axis. Args: fun...
Recompute the index after applying function. Args: result_data: a BaseFrameManager object. axis: Target axis along which function was applied. Returns: A new PandasQueryCompiler. def _post_process_apply(self, result_data, axis, try_scale=True): """Recompute...
Apply function to certain indices across given axis. Args: func: The function to apply. axis: Target axis to apply the function along. Returns: A new PandasQueryCompiler. def _dict_func(self, func, axis, *args, **kwargs): """Apply function to certain indice...
Apply list-like function across given axis. Args: func: The function to apply. axis: Target axis to apply the function along. Returns: A new PandasQueryCompiler. def _list_like_func(self, func, axis, *args, **kwargs): """Apply list-like function across give...
Apply callable functions across given axis. Args: func: The functions to apply. axis: Target axis to apply the function along. Returns: A new PandasQueryCompiler. def _callable_func(self, func, axis, *args, **kwargs): """Apply callable functions across give...
This method applies all manual partitioning functions. Args: axis: The axis to shuffle data along. repartition_func: The function used to repartition data. Returns: A `BaseFrameManager` object. def _manual_repartition(self, axis, repartition_func, **kwargs): ...
Convert categorical variables to dummy variables for certain columns. Args: columns: The columns to convert. Returns: A new QueryCompiler. def get_dummies(self, columns, **kwargs): """Convert categorical variables to dummy variables for certain columns. Args: ...
Note: this function involves making copies of the index in memory. Args: axis: Axis to extract indices. indices: Indices to convert to numerical. Returns: An Index object. def global_idx_to_numeric_idx(self, axis, indices): """ Note: this function i...
Perform the map step. Returns: A BaseFrameManager object. def _get_data(self) -> BaseFrameManager: """Perform the map step. Returns: A BaseFrameManager object. """ def iloc(partition, row_internal_indices, col_internal_indices): return partit...
Gets the lengths of the blocks. Note: This works with the property structure `_lengths_cache` to avoid having to recompute these values each time they are needed. def block_lengths(self): """Gets the lengths of the blocks. Note: This works with the property structure `_lengths_cac...
Gets the widths of the blocks. Note: This works with the property structure `_widths_cache` to avoid having to recompute these values each time they are needed. def block_widths(self): """Gets the widths of the blocks. Note: This works with the property structure `_widths_cache` t...
Updates the current DataFrame inplace. Args: new_query_compiler: The new QueryCompiler to use to manage the data def _update_inplace(self, new_query_compiler): """Updates the current DataFrame inplace. Args: new_query_compiler: The new QueryCompiler to use to ma...
Helper method to check validity of other in inter-df operations def _validate_other( self, other, axis, numeric_only=False, numeric_or_time_only=False, numeric_or_object_only=False, comparison_dtypes_only=False, ): """Helper method to check v...
Helper method to use default pandas function def _default_to_pandas(self, op, *args, **kwargs): """Helper method to use default pandas function""" empty_self_str = "" if not self.empty else " for empty DataFrame" ErrorMessage.default_to_pandas( "`{}.{}`{}".format( ...
Apply an absolute value function to all numeric columns. Returns: A new DataFrame with the applied absolute value. def abs(self): """Apply an absolute value function to all numeric columns. Returns: A new DataFrame with the applied absolute value. """ ...
Add this DataFrame to another or a scalar/list. Args: other: What to add to this DataFrame. axis: The axis to apply addition over. Only applicable to Series or list 'other'. level: A level in the multilevel axis to add over. fill_value: ...