Return whether all elements are True over requested axis
Note:
If axis=None or axis=0, this call applies df.all(axis=1)
to the transpose of df.
def all(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs):
"""Return whether all elements are True over reques... |
Apply a function along input axis of DataFrame.
Args:
func: The function to apply
axis: The axis over which to apply the func.
broadcast: Whether or not to broadcast.
raw: Whether or not to convert to a Series.
reduce: Whether or not to try to ... |
Synonym for DataFrame.fillna(method='bfill')
def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
"""Synonym for DataFrame.fillna(method='bfill')"""
return self.fillna(
method="bfill", axis=axis, limit=limit, downcast=downcast, inplace=inplace
) |
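The synonym can be seen directly in plain pandas, which Modin mirrors: each NaN takes the next valid value below it.

```python
import numpy as np
import pandas as pd

# bfill is a synonym for fillna(method='bfill'): backward-fill NaNs.
df = pd.DataFrame({"a": [np.nan, 2.0, np.nan, 4.0]})
print(df.bfill()["a"].tolist())  # [2.0, 2.0, 4.0, 4.0]
```

A trailing NaN with no valid value below it would remain NaN.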
Return the bool of a single element PandasObject.
This must be a boolean scalar value, either True or False. Raise a
ValueError if the PandasObject does not have exactly 1 element, or that
element is not boolean
def bool(self):
"""Return the bool of a single element PandasObject.... |
Creates a shallow copy of the DataFrame.
Returns:
A new DataFrame pointing to the same partitions as this one.
def copy(self, deep=True):
"""Creates a shallow copy of the DataFrame.
Returns:
A new DataFrame pointing to the same partitions as this one.
"... |
Get the count of non-null objects in the DataFrame.
Arguments:
axis: 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
level: If the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a DataFrame.
numeric_only:... |
Perform a cumulative maximum across the DataFrame.
Args:
axis (int): The axis to take maximum on.
skipna (bool): True to skip NA values, false otherwise.
Returns:
The cumulative maximum of the DataFrame.
def cummax(self, axis=None, skipna=True, *args, **kwar... |
Perform a cumulative product across the DataFrame.
Args:
axis (int): The axis to take product on.
skipna (bool): True to skip NA values, false otherwise.
Returns:
The cumulative product of the DataFrame.
def cumprod(self, axis=None, skipna=True, *args, **kwa... |
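Both cumulative reductions can be sketched in plain pandas, which Modin mirrors:

```python
import pandas as pd

df = pd.DataFrame({"x": [3, 1, 4, 1, 5]})
print(df.cummax()["x"].tolist())   # running maximum: [3, 3, 4, 4, 5]
print(df.cumprod()["x"].tolist())  # running product: [3, 3, 12, 12, 60]
```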
Generates descriptive statistics that summarize the central tendency,
dispersion and shape of a dataset's distribution, excluding NaN values.
Args:
percentiles (list-like of numbers, optional):
The percentiles to include in the output.
include: White-list o... |
Finds the difference between elements on the axis requested
Args:
periods: Periods to shift for forming difference
axis: Take difference over rows or columns
Returns:
DataFrame with the diff applied
def diff(self, periods=1, axis=0):
"""Finds the di... |
Return new object with labels in requested axis removed.
Args:
labels: Index or column labels to drop.
axis: Whether to drop labels from the index (0 / 'index') or
columns (1 / 'columns').
index, columns: Alternative to specifying axis (labels, axis=1 is
... |
Create a new DataFrame from the removed NA values from this one.
Args:
axis (int, tuple, or list): The axis to apply the drop.
how (str): How to drop the NA values.
'all': drop the label if all values are NA.
'any': drop the label if any values are ... |
Return DataFrame with duplicate rows removed, optionally only considering certain columns
Args:
subset : column label or sequence of labels, optional
Only consider certain columns for identifying duplicates, by
default use all of the columns
... |
Checks element-wise that this is equal to other.
Args:
other: A DataFrame or Series or scalar to compare to.
axis: The axis to perform the eq over.
level: The Multilevel index level to apply eq over.
Returns:
A new DataFrame filled with Booleans.... |
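Element-wise comparison against a scalar, sketched in plain pandas, yields the Boolean frame described above:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})
print(df.eq(2)["a"].tolist())  # [False, True, False]
```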
Fill NA/NaN values using the specified method.
Args:
value: Value to use to fill holes. This value cannot be a list.
method: Method to use for filling holes in reindexed Series pad.
ffill: propagate last valid observation forward to next valid
bac... |
Subset rows or columns based on their labels
Args:
items (list): list of labels to subset
like (string): retain labels where `arg in label == True`
regex (string): retain labels matching regex input
axis: axis to filter on
Returns:
A... |
Divides this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the divide against this.
axis: The axis to divide over.
level: The Multilevel index level to apply divide over.
fill_value: The value to fill NaNs with.
... |
Checks element-wise that this is greater than or equal to other.
Args:
other: A DataFrame or Series or scalar to compare to.
axis: The axis to perform the gt over.
level: The Multilevel index level to apply gt over.
Returns:
A new DataFrame fille... |
Get the counts of dtypes in this object.
Returns:
The counts of dtypes in this object.
def get_dtype_counts(self):
"""Get the counts of dtypes in this object.
Returns:
The counts of dtypes in this object.
"""
if hasattr(self, "dtype"):
... |
Get the counts of ftypes in this object.
Returns:
The counts of ftypes in this object.
def get_ftype_counts(self):
"""Get the counts of ftypes in this object.
Returns:
The counts of ftypes in this object.
"""
if hasattr(self, "ftype"):
... |
Checks element-wise that this is greater than other.
Args:
other: A DataFrame or Series or scalar to compare to.
axis: The axis to perform the gt over.
level: The Multilevel index level to apply gt over.
Returns:
A new DataFrame filled with Boole... |
Get the first n rows of the DataFrame.
Args:
n (int): The number of rows to return.
Returns:
A new DataFrame with the first n rows of the DataFrame.
def head(self, n=5):
"""Get the first n rows of the DataFrame.
Args:
n (int): The number ... |
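A quick plain-pandas sketch of `head` with and without the default `n=5`:

```python
import pandas as pd

df = pd.DataFrame({"a": range(10)})
print(len(df.head()))            # 5, the default n
print(df.head(3)["a"].tolist())  # [0, 1, 2]
```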
Get the index of the first occurrence of the max value of the axis.
Args:
axis (int): Identify the max over the rows (1) or columns (0).
skipna (bool): Whether or not to skip NA values.
Returns:
A Series with the index for each maximum value for the axis
... |
Fill a DataFrame with booleans for cells contained in values.
Args:
values (iterable, DataFrame, Series, or dict): The values to find.
Returns:
A new DataFrame with booleans representing whether or not a cell
is in values.
True: cell is contained... |
Checks element-wise that this is less than or equal to other.
Args:
other: A DataFrame or Series or scalar to compare to.
axis: The axis to perform the le over.
level: The Multilevel index level to apply le over.
Returns:
A new DataFrame filled w... |
Checks element-wise that this is less than other.
Args:
other: A DataFrame or Series or scalar to compare to.
axis: The axis to perform the lt over.
level: The Multilevel index level to apply lt over.
Returns:
A new DataFrame filled with Booleans... |
Computes mean across the DataFrame.
Args:
axis (int): The axis to take the mean on.
skipna (bool): True to skip NA values, false otherwise.
Returns:
The mean of the DataFrame. (Pandas series)
def mean(self, axis=None, skipna=None, level=None, numeric_only=No... |
Computes median across the DataFrame.
Args:
axis (int): The axis to take the median on.
skipna (bool): True to skip NA values, false otherwise.
Returns:
The median of the DataFrame. (Pandas series)
def median(self, axis=None, skipna=None, level=None, numeric... |
Returns the memory usage of each column in bytes
Args:
index (bool): Whether to include the memory usage of the DataFrame's
index in returned Series. Defaults to True
deep (bool): If True, introspect the data deeply by interrogating
objects dtypes for s... |
Perform min across the DataFrame.
Args:
axis (int): The axis to take the min on.
skipna (bool): True to skip NA values, false otherwise.
Returns:
The min of the DataFrame.
def min(self, axis=None, skipna=None, level=None, numeric_only=None, **kwargs):
... |
Mods this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the mod against this.
axis: The axis to mod over.
level: The Multilevel index level to apply mod over.
fill_value: The value to fill NaNs with.
R... |
Perform mode across the DataFrame.
Args:
axis (int): The axis to take the mode on.
numeric_only (bool): if True, only apply to numeric columns.
Returns:
DataFrame: The mode of the DataFrame.
def mode(self, axis=0, numeric_only=False, dropna=True):
"... |
Multiplies this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the multiply against this.
axis: The axis to multiply over.
level: The Multilevel index level to apply multiply over.
fill_value: The value to fill Na... |
Checks element-wise that this is not equal to other.
Args:
other: A DataFrame or Series or scalar to compare to.
axis: The axis to perform the ne over.
level: The Multilevel index level to apply ne over.
Returns:
A new DataFrame filled with Boole... |
Return Series with number of distinct
observations over requested axis.
Args:
axis : {0 or 'index', 1 or 'columns'}, default 0
dropna : boolean, default True
Returns:
nunique : Series
def nunique(self, axis=0, dropna=True):
"""Return Ser... |
Pow this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the pow against this.
axis: The axis to pow over.
level: The Multilevel index level to apply pow over.
fill_value: The value to fill NaNs with.
Re... |
Return the product of the values for the requested axis
Args:
axis : {index (0), columns (1)}
skipna : boolean, default True
level : int or level name, default None
numeric_only : boolean, default None
min_count : int, default 0
Retu... |
Return values at the given quantile over requested axis,
a la numpy.percentile.
Args:
q (float): 0 <= q <= 1, the quantile(s) to compute
axis (int): 0 or 'index' for row-wise,
1 or 'columns' for column-wise
interpolation: {'linear',... |
Compute numerical data ranks (1 through n) along axis.
Equal values are assigned a rank that is the [method] of
the ranks of those values.
Args:
axis (int): 0 or 'index' for row-wise,
1 or 'columns' for column-wise
method: {'average', 'min'... |
Reset this index to default and create column from current index.
Args:
level: Only remove the given levels from the index. Removes all
levels by default
drop: Do not try to insert index into DataFrame columns. This
resets the index to the default i... |
Mod this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the div against this.
axis: The axis to div over.
level: The Multilevel index level to apply div over.
fill_value: The value to fill NaNs with.
Re... |
Round each element in the DataFrame.
Args:
decimals: The number of decimals to round to.
Returns:
A new DataFrame.
def round(self, decimals=0, *args, **kwargs):
"""Round each element in the DataFrame.
Args:
decimals: The number of decima... |
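A plain-pandas sketch of element-wise rounding to a fixed number of decimals:

```python
import pandas as pd

df = pd.DataFrame({"a": [1.234, 5.678]})
print(df.round(1)["a"].tolist())  # [1.2, 5.7]
```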
Pow this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the pow against this.
axis: The axis to pow over.
level: The Multilevel index level to apply pow over.
fill_value: The value to fill NaNs with.
Re... |
Subtract a DataFrame/Series/scalar from this DataFrame.
Args:
other: The object to use to apply the subtraction to this.
axis: The axis to apply the subtraction over.
        level: Multilevel index level to subtract over.
fill_value: The value to fill NaNs with.
... |
Div this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the div against this.
axis: The axis to div over.
level: The Multilevel index level to apply div over.
fill_value: The value to fill NaNs with.
Re... |
Returns a random sample of items from an axis of object.
Args:
n: Number of items from axis to return. Cannot be used with frac.
Default = 1 if frac = None.
frac: Fraction of axis items to return. Cannot be used with n.
replace: Sample with or without r... |
Assign desired index to given axis.
Args:
labels (pandas.Index or list-like): The Index to assign.
axis (string or int): The axis to reassign.
inplace (bool): Whether to make these modifications inplace.
Returns:
If inplace is False, returns a ne... |
Sort a DataFrame by one of the indices (columns or index).
Args:
axis: The axis to sort over.
level: The MultiIndex level to sort over.
ascending: Ascending or descending
inplace: Whether or not to update this DataFrame inplace.
kind: How to pe... |
Sorts by a column/row or list of columns/rows.
Args:
by: A list of labels for the axis to sort over.
axis: The axis to sort.
ascending: Sort in ascending or descending order.
inplace: If true, do the operation inplace.
kind: How to sort.
... |
Subtract a DataFrame/Series/scalar from this DataFrame.
Args:
other: The object to use to apply the subtraction to this.
axis: The axis to apply the subtraction over.
        level: Multilevel index level to subtract over.
fill_value: The value to fill NaNs with.
... |
Convert the DataFrame to a NumPy array.
Args:
dtype: The dtype to pass to numpy.asarray()
copy: Whether to ensure that the returned value is a not a view on another
array.
Returns:
A numpy array.
def to_numpy(self, dtype=None, copy=False):
... |
Divides this DataFrame against another DataFrame/Series/scalar.
Args:
other: The object to use to apply the divide against this.
axis: The axis to divide over.
level: The Multilevel index level to apply divide over.
fill_value: The value to fill NaNs with.
... |
Computes variance across the DataFrame.
Args:
axis (int): The axis to take the variance on.
skipna (bool): True to skip NA values, false otherwise.
ddof (int): degrees of freedom
Returns:
The variance of the DataFrame.
def var(
self, ax... |
Get the number of elements in the DataFrame.
Returns:
The number of elements in the DataFrame.
def size(self):
"""Get the number of elements in the DataFrame.
Returns:
The number of elements in the DataFrame.
"""
return len(self._query_compiler... |
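As in plain pandas, `size` counts every element, i.e. rows times columns:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
print(df.size)  # 2 rows * 2 columns = 4
```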
Flushes the call_queue and returns the data.
Note: Since this object is a simple wrapper, just return the data.
Returns:
The object that was `put`.
def get(self):
"""Flushes the call_queue and returns the data.
Note: Since this object is a simple wrapper, just return the ... |
Apply some callable function to the data in this partition.
Note: It is up to the implementation how kwargs are handled. They are
an important part of many implementations. As of right now, they
are not serialized.
Args:
func: The lambda to apply (may already be cor... |
Apply some callable function to the data in this partition.
Note: It is up to the implementation how kwargs are handled. They are
an important part of many implementations. As of right now, they
are not serialized.
Args:
func: The lambda to apply (may already be cor... |
Add the function to the apply function call stack.
This function will be executed when apply is called. Functions are executed
in the order inserted; apply's own func runs last and its result is returned.
def add_to_apply_calls(self, func, **kwargs):
"""Add the function to the apply function call stack.
... |
Use a Ray task to read a chunk of a CSV into a pyarrow Table.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
Args:
fname: The filename of the file to open.
num_splits: The number of splits (partitions) to separate the DataFrame into.
start: The start byte offse... |
Computes the number of rows and/or columns to include in each partition.
Args:
df: The DataFrame to split.
num_splits: The maximum number of splits to separate the DataFrame into.
default_block_size: Minimum number of rows/columns (default set to 32x32).
axis: The axis to split. (0:... |
A memory efficient way to get a block of NaNs.
Args:
partition_class (BaseFramePartition): The class to use to put the object
in the remote format.
n_row(int): The number of rows.
n_col(int): The number of columns.
transpose(bool): If true, swap rows and columns.
Ret... |
Split the Pandas result evenly based on the provided number of splits.
Args:
axis: The axis to split across.
num_splits: The number of even splits to create.
result: The result of the computation. This should be a Pandas
DataFrame.
length_list: The list of lengths to spl... |
Unpack the user input for getitem and setitem and compute ndim
loc[a] -> ([a], :), 1D
loc[[a,b],] -> ([a,b], :),
loc[a,b] -> ([a], [b]), 0D
def _parse_tuple(tup):
"""Unpack the user input for getitem and setitem and compute ndim
loc[a] -> ([a], :), 1D
loc[[a,b],] -> ([a,b], :),
loc[a,b] -... |
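The dimensionality rules above can be checked against plain pandas `loc` behavior (a sketch of the user-facing semantics, not Modin's internals):

```python
import pandas as pd

df = pd.DataFrame({"b": [10, 20]}, index=["a", "c"])

print(df.loc["a", "b"])          # scalar locators -> 0-D result: 10
print(df.loc["a"].shape)         # one scalar locator -> 1-D Series: (1,)
print(df.loc[["a", "c"]].shape)  # list locator -> 2-D sub-frame: (2, 1)
```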
Determine if a locator will enlarge the global index.
Enlargement happens when you try to locate using labels that aren't in the
original index. In other words, enlargement == adding NaNs!
def _is_enlargement(locator, global_index):
"""Determine if a locator will enlarge the global index.
Enlargement h... |
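A hypothetical re-implementation of the check described above (the helper name `is_enlargement` and list handling are assumptions, not Modin's code): a locator enlarges when it names labels missing from the index.

```python
import pandas as pd

def is_enlargement(locator, global_index):
    # Enlarges iff any requested label is absent from the index.
    labels = locator if isinstance(locator, list) else [locator]
    return any(label not in global_index for label in labels)

idx = pd.Index(["a", "b"])
print(is_enlargement(["a", "c"], idx))  # True: "c" would add a NaN row
print(is_enlargement("a", idx))         # False: no new labels
```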
Compute the ndim of result from locators
def _compute_ndim(row_loc, col_loc):
"""Compute the ndim of result from locators
"""
row_scaler = is_scalar(row_loc)
col_scaler = is_scalar(col_loc)
if row_scaler and col_scaler:
ndim = 0
elif row_scaler ^ col_scaler:
ndim = 1
else:
... |
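The branch logic above amounts to a small rule, sketched here as a standalone function (the name `compute_ndim` is illustrative): 0-D when both locators are scalars, 1-D when exactly one is, 2-D otherwise.

```python
from pandas.api.types import is_scalar

def compute_ndim(row_loc, col_loc):
    # Mirrors _compute_ndim: scalar/scalar -> 0, mixed -> 1, else -> 2.
    row_scalar, col_scalar = is_scalar(row_loc), is_scalar(col_loc)
    if row_scalar and col_scalar:
        return 0
    if row_scalar ^ col_scalar:
        return 1
    return 2

print(compute_ndim("a", "b"))      # 0
print(compute_ndim(["a"], "b"))    # 1
print(compute_ndim(["a"], ["b"]))  # 2
```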
Use numpy to broadcast or reshape item.
Notes:
- Numpy is memory efficient, there shouldn't be performance issue.
def _broadcast_item(self, row_lookup, col_lookup, item, to_shape):
"""Use numpy to broadcast or reshape item.
Notes:
- Numpy is memory efficient, there sho... |
Perform remote write and replace blocks.
def _write_items(self, row_lookup, col_lookup, item):
"""Perform remote write and replace blocks.
"""
self.qc.write_items(row_lookup, col_lookup, item) |
Handle Enlargement (if there is one).
Returns:
None
def _handle_enlargement(self, row_loc, col_loc):
"""Handle Enlargement (if there is one).
Returns:
None
"""
if _is_enlargement(row_loc, self.qc.index) or _is_enlargement(
col_loc, self.qc.c... |
Helper for _enlarge_axis, compute common labels and extra labels.
Returns:
nan_labels: The labels needs to be added
def _compute_enlarge_labels(self, locator, base_index):
"""Helper for _enlarge_axis, compute common labels and extra labels.
Returns:
nan_labels: The l... |
Splits the DataFrame read into smaller DataFrames and handles all edge cases.
Args:
axis: Which axis to split over.
num_splits: The number of splits to create.
df: The DataFrame after it has been read.
Returns:
A list of pandas DataFrames.
def _split_result_for_readers(axis, n... |
Use a Ray task to read columns from Parquet into a Pandas DataFrame.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
Args:
path: The path of the Parquet file.
columns: The list of column names to read.
num_splits: The number of partitions to split the column int... |
Use a Ray task to read a chunk of a CSV into a Pandas DataFrame.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
Args:
fname: The filename of the file to open.
num_splits: The number of splits (partitions) to separate the DataFrame into.
start: The start byte of... |
Use a Ray task to read columns from HDF5 into a Pandas DataFrame.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
Args:
path_or_buf: The path of the HDF5 file.
columns: The list of column names to read.
num_splits: The number of partitions to split the column in... |
Use a Ray task to read columns from Feather into a Pandas DataFrame.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
Args:
path: The path of the Feather file.
columns: The list of column names to read.
num_splits: The number of partitions to split the column int... |
Use a Ray task to read a chunk of SQL source.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
def _read_sql_with_limit_offset(
num_splits, sql, con, index_col, kwargs
): # pragma: no cover
"""Use a Ray task to read a chunk of SQL source.
Note: Ray functions are not detected b... |
Get the index from the indices returned by the workers.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
def get_index(index_name, *partition_indices): # pragma: no cover
"""Get the index from the indices returned by the workers.
Note: Ray functions are not detected by codecov (th... |
Load a parquet object from the file path, returning a DataFrame.
Ray DataFrame only supports pyarrow engine for now.
Args:
path: The filepath of the parquet file.
We only support local files for now.
engine: Ray only support pyarrow reader.
... |
Constructs a DataFrame from a CSV file.
Args:
filepath (str): path to the CSV file.
npartitions (int): number of partitions for the DataFrame.
kwargs (dict): args excluding filepath provided to read_csv.
Returns:
DataFrame or Series constructed from CSV ... |
Read csv file from local disk.
Args:
filepath_or_buffer:
The filepath of the csv file.
We only support local files for now.
kwargs: Keyword arguments in pandas.read_csv
def _read(cls, filepath_or_buffer, **kwargs):
"""Read csv file from local ... |
Load a h5 file from the file path or buffer, returning a DataFrame.
Args:
path_or_buf: string, buffer or path object
Path to the file to open, or an open :class:`pandas.HDFStore` object.
kwargs: Pass into pandas.read_hdf function.
Returns:
DataFrame ... |
Read a pandas.DataFrame from Feather format.
Ray DataFrame only supports pyarrow engine for now.
Args:
path: The filepath of the feather file.
We only support local files for now.
multi threading is set to True by default
columns: not support... |
Write records stored in a DataFrame to a SQL database.
Args:
qc: the query compiler of the DF that we want to run to_sql on
kwargs: parameters for pandas.to_sql(**kwargs)
def to_sql(cls, qc, **kwargs):
"""Write records stored in a DataFrame to a SQL database.
Args:
... |
Reads a SQL query or database table into a DataFrame.
Args:
sql: string or SQLAlchemy Selectable (select or text object) SQL query to be
executed or a table name.
con: SQLAlchemy connectable (engine/connection) or database string URI or
DBAPI2 connection (... |
Convert the arg to datetime format. If not Ray DataFrame, this falls
back on pandas.
Args:
errors ('raise' or 'ignore'): If 'ignore', errors are silenced.
Pandas blatantly ignores this argument so we will too.
dayfirst (bool): Date format is passed in as day first.
yearfi... |
Read SQL query or database table into a DataFrame.
Args:
sql: string or SQLAlchemy Selectable (select or text object) SQL query to be executed or a table name.
con: SQLAlchemy connectable (engine/connection) or database string URI or DBAPI2 connection (fallback mode)
index_col: Column(s) to... |
Gets the lengths of the blocks.
Note: This works with the property structure `_lengths_cache` to avoid
having to recompute these values each time they are needed.
def block_lengths(self):
"""Gets the lengths of the blocks.
Note: This works with the property structure `_lengths_cac... |
Gets the widths of the blocks.
Note: This works with the property structure `_widths_cache` to avoid
having to recompute these values each time they are needed.
def block_widths(self):
"""Gets the widths of the blocks.
Note: This works with the property structure `_widths_cache` t... |
Deploy a function to a partition in Ray.
Note: Ray functions are not detected by codecov (thus pragma: no cover)
Args:
func: The function to apply.
partition: The partition to apply the function to.
kwargs: A dictionary of keyword arguments for the function.
Returns:
The r... |
Gets the object out of the plasma store.
Returns:
The object from the plasma store.
def get(self):
"""Gets the object out of the plasma store.
Returns:
The object from the plasma store.
"""
if len(self.call_queue):
return self.apply(lambda x... |
Gets the lengths of the blocks.
Note: This works with the property structure `_lengths_cache` to avoid
having to recompute these values each time they are needed.
def block_lengths(self):
"""Gets the lengths of the blocks.
Note: This works with the property structure `_lengths_cac... |
Gets the widths of the blocks.
Note: This works with the property structure `_widths_cache` to avoid
having to recompute these values each time they are needed.
def block_widths(self):
"""Gets the widths of the blocks.
Note: This works with the property structure `_widths_cache` t... |
Applies `map_func` to every partition.
Args:
map_func: The function to apply.
Returns:
A new BaseFrameManager object, the type of object that called this.
def map_across_blocks(self, map_func):
"""Applies `map_func` to every partition.
Args:
map_fu... |
Copartition two BlockPartitions objects.
Args:
axis: The axis to copartition.
other: The other BlockPartitions object to copartition with.
left_func: The function to apply to left. If None, just use the dimension
of self (based on axis).
right_fun... |
Applies `map_func` to every partition.
Note: This method should be used in the case that `map_func` relies on
some global information about the axis.
Args:
axis: The axis to perform the map across (0 - index, 1 - columns).
map_func: The function to apply.
R... |
Take the first (or last) n rows or columns from the blocks
Note: Axis = 0 will be equivalent to `head` or `tail`
Axis = 1 will be equivalent to `front` or `back`
Args:
axis: The axis to extract (0 for extracting rows, 1 for extracting columns)
n: The number of row... |
Concatenate the blocks with another set of blocks.
Note: Assumes that the blocks are already the same shape on the
dimension being concatenated. A ValueError will be thrown if this
condition is not met.
Args:
axis: The axis to concatenate to.
other_block... |
Convert this object into a Pandas DataFrame from the partitions.
Args:
is_transposed: A flag for telling this object that the external
representation is transposed, but not the internal.
Returns:
A Pandas DataFrame
def to_pandas(self, is_transposed=False):
... |
This gets the internal indices stored in the partitions.
Note: These are the global indices of the object. This is mostly useful
when you have deleted rows/columns internally, but do not know
which ones were deleted.
Args:
axis: This axis to extract the labels. (0 -... |
Convert a global index to a block index and local index.
Note: This method is primarily used to convert a global index into a
partition index (along the axis provided) and local index (useful
            for `iloc` or similar operations).
Args:
axis: The axis along which to get ... |
Convert indices to a dict of block index to internal index mapping.
Note: See `_get_blocks_containing_index` for primary usage. This method
accepts a list of indices rather than just a single value, and uses
`_get_blocks_containing_index`.
Args:
axis: The axis along... |