Index this Variable with -1 remapped to fill_value.
def _getitem_with_mask(self, key, fill_value=dtypes.NA):
"""Index this Variable with -1 remapped to fill_value."""
# TODO(shoyer): expose this method in public API somewhere (isel?) and
# use it for reindex.
# TODO(shoyer): add a sanit... |
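The `-1 → fill_value` remapping described above can be sketched in pure Python. `getitem_with_mask` here is an illustrative 1-D helper, not the xarray method, which works on n-dimensional (possibly dask-backed) arrays and promotes the dtype to hold the fill:

```python
import math

def getitem_with_mask(values, key, fill_value=math.nan):
    """Index `values` by `key`, remapping index -1 to `fill_value`.

    Pure-Python sketch of the semantics only.
    """
    return [fill_value if k == -1 else values[k] for k in key]

out = getitem_with_mask([10, 20, 30], [0, -1, 2])
```

This is the building block reindexing needs: positions with no match are encoded as -1 and come back as the fill value.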
Returns a copy of this object.
If `deep=True`, the data array is loaded into memory and copied onto
the new object. Dimensions, attributes and encodings are always copied.
Use `data` to create a new object with the same structure as
original but entirely new data.
Parameters
... |
Coerce this array's data into a dask array with the given chunks.
If this variable is a non-dask array, it will be converted to dask
array. If it's a dask array, it will be rechunked to the given chunk
sizes.
If chunks is not provided for one or more dimensions, chunk
... |
Return a new array indexed along the specified dimension(s).
Parameters
----------
**indexers : {dim: indexer, ...}
Keyword arguments with names matching dimensions and values given
by integers, slice objects or arrays.
Returns
-------
obj : Arra... |
Return a new object with squeezed data.
Parameters
----------
dim : None or str or tuple of str, optional
Selects a subset of the length one dimensions. If a dimension is
selected with length greater than one, an error is raised. If
None, all length one dimen... |
Return a new Variable with shifted data.
Parameters
----------
shifts : mapping of the form {dim: offset}
Integer offset to shift along each of the given dimensions.
Positive offsets shift to the right; negative offsets shift to the
left.
fill_value: ... |
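The shift semantics (positive offsets move data right, vacated slots get the fill value) can be sketched on a plain list; `shift_list` is an illustrative helper, not xarray's implementation:

```python
def shift_list(values, offset, fill_value=None):
    """Shift a 1-D sequence; positive offsets shift right, negative left.

    Vacated positions are filled with `fill_value`.
    """
    n = len(values)
    if offset == 0:
        return list(values)
    if offset > 0:
        return [fill_value] * min(offset, n) + list(values[:max(n - offset, 0)])
    k = -offset
    return list(values[min(k, n):]) + [fill_value] * min(k, n)

shifted = shift_list([1, 2, 3, 4], 2, fill_value=0)
```

Unlike `roll` below, data shifted off the edge is discarded rather than wrapped around.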
Return a new Variable with padded data.
Parameters
----------
pad_width: Mapping of the form {dim: (before, after)}
Number of values padded to the edges of each dimension.
**pad_widths_kwargs:
Keyword arguments for pad_widths
def pad_with_fill_value(self, pad_widths... |
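The `(before, after)` padding contract can be shown in one dimension; `pad_1d` is a hypothetical helper, whereas the real method pads n-dimensional data and picks a fill dtype via xarray's promotion rules:

```python
import math

def pad_1d(values, before, after, fill_value=math.nan):
    """Pad a 1-D list with `before`/`after` copies of fill_value."""
    return [fill_value] * before + list(values) + [fill_value] * after

padded = pad_1d([1.0, 2.0], 1, 2, fill_value=0.0)
```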
Return a new Variable with rolled data.
Parameters
----------
shifts : mapping of the form {dim: offset}
Integer offset to roll along each of the given dimensions.
Positive offsets roll to the right; negative offsets roll to the
left.
**shifts_kwargs:
... |
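Roll differs from shift in that data wraps around instead of being dropped. A minimal list-based sketch (not the xarray implementation, which rolls along named dimensions):

```python
def roll_list(values, offset):
    """Roll (circularly shift) a list; positive offsets roll right."""
    n = len(values)
    if n == 0:
        return []
    offset %= n
    return values[-offset:] + values[:-offset] if offset else list(values)
```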
Return a new Variable object with transposed dimensions.
Parameters
----------
*dims : str, optional
By default, reverse the dimensions. Otherwise, reorder the
dimensions to this order.
Returns
-------
transposed : Variable
The return... |
Return a new variable with given set of dimensions.
This method might be used to attach new dimension(s) to variable.
When possible, this operation does not copy this variable's data.
Parameters
----------
dims : str or sequence of str or dict
Dimensions to include ... |
Stack any number of existing dimensions into a single new dimension.
New dimensions will be added at the end, and the order of the data
along each new dimension will be in contiguous (C) order.
Parameters
----------
dimensions : Mapping of form new_name=(dim1, dim2, ...)
... |
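"Contiguous (C) order" means the last of the stacked dimensions varies fastest, i.e. row-major flattening. A two-dimensional sketch with plain nested lists (`stack_c_order` is an illustrative name):

```python
from itertools import chain

def stack_c_order(grid):
    """Flatten a 2-D nested list in contiguous (C, row-major) order,
    mirroring how stacking two dims collapses them into one."""
    return list(chain.from_iterable(grid))

flat = stack_c_order([[1, 2, 3], [4, 5, 6]])
```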
Unstack an existing dimension into multiple new dimensions.
New dimensions will be added at the end, and the order of the data
along each new dimension will be in contiguous (C) order.
Parameters
----------
dimensions : mapping of the form old_dim={dim1: size1, ...}
... |
Reduce this array by applying `func` along some dimension(s).
Parameters
----------
func : function
Function which can be called in the form
`func(x, axis=axis, **kwargs)` to return the result of reducing an
np.ndarray over an integer valued axis.
dim... |
Concatenate variables along a new or existing dimension.
Parameters
----------
variables : iterable of Array
Arrays to stack together. Each variable is expected to have
matching dimensions and shape except for along the stacked
dimension.
dim : str or... |
True if two Variables have the same dimensions and values;
otherwise False.
Variables can still be equal (like pandas objects) if they have NaN
values in the same locations.
This method is necessary because `v1 == v2` for Variables
does element-wise comparisons (like numpy.ndar... |
True if two Variables have the same values after being broadcast against
each other; otherwise False.
Variables can still be equal (like pandas objects) if they have NaN
values in the same locations.
def broadcast_equals(self, other, equiv=duck_array_ops.array_equiv):
"""True if two Variabl... |
Like equals, but also checks attributes.
def identical(self, other):
"""Like equals, but also checks attributes.
"""
try:
return (utils.dict_equiv(self.attrs, other.attrs) and
self.equals(other))
except (TypeError, AttributeError):
return Fals... |
Compute the qth quantile of the data along the specified dimension.
Returns the qth quantiles(s) of the array elements.
Parameters
----------
q : float in range of [0,1] (or sequence of floats)
Quantile to compute, which must be between 0 and 1
inclusive.
... |
Ranks the data.
Equal values are assigned a rank that is the average of the ranks that
would have been otherwise assigned to all of the values within that
set. Ranks begin at 1, not 0. If `pct`, computes percentage ranks.
NaNs in the input array are returned as NaNs.
The `bot... |
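The tie rule above (equal values share the average of the ranks they would otherwise occupy, starting from 1) can be sketched without bottleneck; `rank_average` is an illustrative pure-Python version that omits NaN handling:

```python
def rank_average(values):
    """Rank values starting at 1; ties get the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of equal values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

r = rank_average([10, 20, 20, 30])
```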
Make a rolling_window along dim and add a new_dim to the last place.
Parameters
----------
dim: str
Dimension over which to compute rolling_window
window: int
Window size of the rolling
window_dim: str
New name of the window dimension.
... |
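The key property is that the new window dimension goes last and the output keeps the original length along `dim`, which requires padding. A 1-D sketch with one simplified padding choice (left-pad by `window - 1`; the real method's padding depends on `center` and works on nd-arrays):

```python
import math

def rolling_window_1d(values, window, fill_value=math.nan):
    """Emit one window per original element; the window axis comes last."""
    padded = [fill_value] * (window - 1) + list(values)
    return [padded[i:i + window] for i in range(len(values))]

wins = rolling_window_1d([1, 2, 3], 2, fill_value=0)
```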
Apply a reduction function along coarsening windows.
def coarsen(self, windows, func, boundary='exact', side='left'):
    """
    Apply a reduction function along coarsening windows.
    """
windows = {k: v for k, v in windows.items() if k in self.dims}
if not windows:
return self.copy()
reshaped, axes = self._coarsen_reshape(windows, boundary, side)
if ... |
Construct a reshaped array for coarsen
def _coarsen_reshape(self, windows, boundary, side):
    """
    Construct a reshaped array for coarsen
    """
if not utils.is_dict_like(boundary):
boundary = {d: boundary for d in windows.keys()}
if not utils.is_dict_like(side):
... |
A (private) method to convert a datetime array to a numeric dtype
See duck_array_ops.datetime_to_numeric
def _to_numeric(self, offset=None, datetime_unit=None, dtype=float):
    """ A (private) method to convert a datetime array to a numeric dtype
    See duck_array_ops.datetime_to_numeric
"""
... |
Specialized version of Variable.concat for IndexVariable objects.
This exists because we want to avoid converting Index objects to NumPy
arrays, if possible.
def concat(cls, variables, dim='concat_dim', positions=None,
shortcut=False):
"""Specialized version of Variable.concat f... |
Returns a copy of this object.
`deep` is ignored since data is stored in the form of
pandas.Index, which is already immutable. Dimensions, attributes
and encodings are always copied.
Use `data` to create a new object with the same structure as
original but entirely new data.
... |
Convert this variable to a pandas.Index
def to_index(self):
"""Convert this variable to a pandas.Index"""
# n.b. creating a new pandas.Index from an old pandas.Index is
# basically free as pandas.Index objects are immutable
assert self.ndim == 1
index = self._data.array
... |
Return MultiIndex level names or None if this IndexVariable has no
MultiIndex.
def level_names(self):
"""Return MultiIndex level names or None if this IndexVariable has no
MultiIndex.
"""
index = self.to_index()
if isinstance(index, pd.MultiIndex):
return ind... |
Return a new IndexVariable from a given MultiIndex level.
def get_level_variable(self, level):
"""Return a new IndexVariable from a given MultiIndex level."""
if self.level_names is None:
raise ValueError("IndexVariable %r has no MultiIndex" % self.name)
index = self.to_index()
... |
Generalization of
pandas.tseries.index.DatetimeIndex._parsed_string_to_bounds
for use with non-standard calendars and cftime.datetime
objects.
def _parsed_string_to_bounds(date_type, resolution, parsed):
"""Generalization of
pandas.tseries.index.DatetimeIndex._parsed_string_to_bounds
for use wi... |
Adapted from pandas.tslib.get_date_field
def get_date_field(datetimes, field):
"""Adapted from pandas.tslib.get_date_field"""
return np.array([getattr(date, field) for date in datetimes]) |
Adapted from pandas.tseries.index._field_accessor
def _field_accessor(name, docstring=None, min_cftime_version='0.0'):
"""Adapted from pandas.tseries.index._field_accessor"""
def f(self, min_cftime_version=min_cftime_version):
import cftime
version = cftime.__version__
if LooseVersio... |
Create a numpy array from an array of strings.
For use in generating dates from strings for use with interp. Assumes the
array is either 0-dimensional or 1-dimensional.
Parameters
----------
strings : array of strings
Strings to convert to dates
date_type : cftime.datetime type
... |
Adapted from
pandas.tseries.index.DatetimeIndex._partial_date_slice
Note that when using a CFTimeIndex, if a partial-date selection
returns a single element, it will never be converted to a scalar
coordinate; this is in slight contrast to the behavior when using
a DatetimeIndex,... |
Adapted from pandas.tseries.index.DatetimeIndex._get_string_slice
def _get_string_slice(self, key):
"""Adapted from pandas.tseries.index.DatetimeIndex._get_string_slice"""
parsed, resolution = _parse_iso8601_with_reso(self.date_type, key)
try:
loc = self._partial_date_slice(resoluti... |
Adapted from pandas.tseries.index.DatetimeIndex.get_loc
def get_loc(self, key, method=None, tolerance=None):
"""Adapted from pandas.tseries.index.DatetimeIndex.get_loc"""
if isinstance(key, str):
return self._get_string_slice(key)
else:
return pd.Index.get_loc(self, key,... |
Adapted from
pandas.tseries.index.DatetimeIndex._maybe_cast_slice_bound
def _maybe_cast_slice_bound(self, label, side, kind):
"""Adapted from
pandas.tseries.index.DatetimeIndex._maybe_cast_slice_bound"""
if isinstance(label, str):
parsed, resolution = _parse_iso8601_with_res... |
Adapted from pandas.tseries.index.DatetimeIndex.get_value
def get_value(self, series, key):
"""Adapted from pandas.tseries.index.DatetimeIndex.get_value"""
if np.asarray(key).dtype == np.dtype(bool):
return series.iloc[key]
elif isinstance(key, slice):
return series.iloc... |
Shift the CFTimeIndex by a multiple of the given frequency.
See the documentation for :py:func:`~xarray.cftime_range` for a
complete listing of valid frequency strings.
Parameters
----------
n : int
Periods to shift by
freq : str or datetime.timedelta
... |
If possible, convert this index to a pandas.DatetimeIndex.
Parameters
----------
unsafe : bool
Flag to turn off warning when converting from a CFTimeIndex with
a non-standard calendar to a DatetimeIndex (default ``False``).
Returns
-------
pandas... |
Create a function that dispatches to dask for dask array inputs.
def _dask_or_eager_func(name, eager_module=np, dask_module=dask_array,
list_of_args=False, array_args=slice(1),
requires_dask=None):
"""Create a function that dispatches to dask for dask array inputs.""... |
Cast arrays to a shared dtype using xarray's type promotion rules.
def as_shared_dtype(scalars_or_arrays):
    """Cast arrays to a shared dtype using xarray's type promotion rules."""
arrays = [asarray(x) for x in scalars_or_arrays]
# Pass arrays directly instead of dtypes to result_type so scalars
# g... |
Like np.allclose, but also allows values to be NaN in both arrays
def allclose_or_equiv(arr1, arr2, rtol=1e-5, atol=1e-8):
"""Like np.allclose, but also allows values to be NaN in both arrays
"""
arr1, arr2 = as_like_arrays(arr1, arr2)
if arr1.shape != arr2.shape:
return False
return bool(
... |
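The "NaN in both arrays counts as equal" rule can be sketched element-wise with the stdlib; this 1-D version is illustrative only, while the real function handles nd/dask arrays via `as_like_arrays`:

```python
import math

def allclose_or_equiv(xs, ys, rtol=1e-5, atol=1e-8):
    """Elementwise closeness where paired NaNs count as equal."""
    if len(xs) != len(ys):
        return False  # shape mismatch
    for a, b in zip(xs, ys):
        if math.isnan(a) and math.isnan(b):
            continue
        if math.isnan(a) or math.isnan(b):
            return False
        if not math.isclose(a, b, rel_tol=rtol, abs_tol=atol):
            return False
    return True
```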
Like np.array_equal, but also allows values to be NaN in both arrays
def array_equiv(arr1, arr2):
"""Like np.array_equal, but also allows values to be NaN in both arrays
"""
arr1, arr2 = as_like_arrays(arr1, arr2)
if arr1.shape != arr2.shape:
return False
with warnings.catch_warnings():
... |
Count the number of non-NA values in this array along the given axis or axes
def count(data, axis=None):
    """Count the number of non-NA values in this array along the given axis or axes
"""
return np.sum(np.logical_not(isnull(data)), axis=axis) |
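The same sum-of-not-null idea in plain Python over a flat list (only float NaN is treated as missing in this sketch; xarray's `isnull` also covers NaT and object dtypes):

```python
import math

def count_non_na(values):
    """Count entries that are not NaN in a flat sequence."""
    return sum(1 for v in values
               if not (isinstance(v, float) and math.isnan(v)))

n = count_non_na([1.0, math.nan, 3.0])
```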
Convert an array containing datetime-like data to an array of floats.
Parameters
----------
da : np.array
Input data
offset: Scalar with the same type of array or None
If None, subtract minimum values to reduce round off error
datetime_unit: None or any of {'Y', 'M', 'W', 'D', 'h', ... |
In-house mean that can handle np.datetime64 or cftime.datetime
dtypes
def mean(array, axis=None, skipna=None, **kwargs):
    """In-house mean that can handle np.datetime64 or cftime.datetime
    dtypes"""
from .common import _contains_cftime_datetimes
array = asarray(array)
if array.dtype.kind in 'Mm':
... |
Return the first non-NA elements in this array along the given axis
def first(values, axis, skipna=None):
"""Return the first non-NA elements in this array along the given axis
"""
if (skipna or skipna is None) and values.dtype.kind not in 'iSU':
# only bother for dtypes that can hold NaN
_... |
Return the last non-NA elements in this array along the given axis
def last(values, axis, skipna=None):
"""Return the last non-NA elements in this array along the given axis
"""
if (skipna or skipna is None) and values.dtype.kind not in 'iSU':
# only bother for dtypes that can hold NaN
_fai... |
Make an ndarray with a rolling window along the axis-th dimension.
The rolling dimension will be placed last.
def rolling_window(array, axis, window, center, fill_value):
    """
    Make an ndarray with a rolling window along the axis-th dimension.
    The rolling dimension will be placed last... |
Concatenate xarray objects along a new or existing dimension.
Parameters
----------
objs : sequence of Dataset and DataArray objects
xarray objects to concatenate together. Each object is expected to
consist of variables and coordinates with matching shapes except for
along the conc... |
Infer the dimension name and 1d coordinate variable (if appropriate)
for concatenating along the new dimension.
def _calc_concat_dim_coord(dim):
"""
Infer the dimension name and 1d coordinate variable (if appropriate)
for concatenating along the new dimension.
"""
from .dataarray import DataArr... |
Determine which dataset variables need to be concatenated in the result,
and which can simply be taken from the first dataset.
def _calc_concat_over(datasets, dim, data_vars, coords):
"""
Determine which dataset variables need to be concatenated in the result,
and which can simply be taken from the fir... |
Concatenate a sequence of datasets along a new or existing dimension
def _dataset_concat(datasets, dim, data_vars, coords, compat, positions):
"""
Concatenate a sequence of datasets along a new or existing dimension
"""
from .dataset import Dataset
if compat not in ['equals', 'identical']:
... |
Given a list of lists (of lists...) of objects, returns an iterator
which returns a tuple containing the index of each object in the nested
list structure as the key, and the object. This can then be called by the
dict constructor to create a dictionary of the objects organised by their
position in the o... |
Concatenates and merges an N-dimensional structure of datasets.
No checks are performed on the consistency of the datasets, concat_dims or
tile_IDs, because it is assumed that this has already been done.
Parameters
----------
combined_ids : Dict[Tuple[int, ...], xarray.Dataset]
Structure ... |
Calls logic to decide concatenation order before concatenating.
def _auto_combine(datasets, concat_dims, compat, data_vars, coords,
infer_order_from_coords, ids):
"""
Calls logic to decide concatenation order before concatenating.
"""
# Arrange datasets for concatenation
if infer... |
Attempt to auto-magically combine the given datasets into one.
This method attempts to combine a list of datasets into a single entity by
inspecting metadata and using a combination of concat and merge.
It does not concatenate along more than one dimension or sort data under
any circumstances. It does a... |
Return the cftime date type for a given calendar name.
def get_date_type(calendar):
"""Return the cftime date type for a given calendar name."""
try:
import cftime
except ImportError:
raise ImportError(
'cftime is required for dates with non-standard calendars')
else:
... |
Find the day in `other`'s month that satisfies a BaseCFTimeOffset's
onOffset policy, as described by the `day_option` argument.
Parameters
----------
other : cftime.datetime
day_option : 'start', 'end'
'start': returns 1
'end': returns last day of the month
Returns
-------
... |
The number of days in the month of the given date
def _days_in_month(date):
"""The number of days in the month of the given date"""
if date.month == 12:
reference = type(date)(date.year + 1, 1, 1)
else:
reference = type(date)(date.year, date.month + 1, 1)
return (reference - timedelta(d... |
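The truncated body above uses the first-of-next-month trick; the same logic runs unchanged on stdlib dates (the original works on cftime types via `type(date)`):

```python
from datetime import date, timedelta

def days_in_month(d):
    """Number of days in the month of `d`."""
    if d.month == 12:
        reference = date(d.year + 1, 1, 1)
    else:
        reference = date(d.year, d.month + 1, 1)
    # one day before the first of the next month is the last of this month
    return (reference - timedelta(days=1)).day

n = days_in_month(date(2020, 2, 10))
```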
Adjust the number of times a monthly offset is applied based
on the day of a given date, and the reference day provided.
def _adjust_n_months(other_day, n, reference_day):
"""Adjust the number of times a monthly offset is applied based
on the day of a given date, and the reference day provided.
"""
... |
Adjust the number of times an annual offset is applied based on
another date, and the reference day provided
def _adjust_n_years(other, n, month, reference_day):
"""Adjust the number of times an annual offset is applied based on
another date, and the reference day provided"""
if n > 0:
if other... |
Shift the date to a month start or end a given number of months away.
def _shift_month(date, months, day_option='start'):
"""Shift the date to a month start or end a given number of months away.
"""
delta_year = (date.month + months) // 12
month = (date.month + months) % 12
if month == 0:
... |
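The month arithmetic above (carrying overflow into the year, then landing on the month's start or end) can be sketched on stdlib dates; this is an illustrative re-derivation, not the cftime-based original:

```python
from datetime import date, timedelta

def shift_month(d, months, day_option='start'):
    """Shift `d` by `months` months, landing on the month start or end."""
    total = d.month - 1 + months        # zero-based month with carry
    year = d.year + total // 12
    month = total % 12 + 1
    if day_option == 'start':
        day = 1
    else:  # 'end': last day of the target month
        if month == 12:
            nxt = date(year + 1, 1, 1)
        else:
            nxt = date(year, month + 1, 1)
        day = (nxt - timedelta(days=1)).day
    return date(year, month, day)
```

Floor division makes negative shifts carry correctly: shifting January by -1 lands in December of the previous year.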
Possibly increment or decrement the number of periods to shift
based on rollforward/rollbackward conventions.
Parameters
----------
other : cftime.datetime
n : number of periods to increment, before adjusting for rolling
month : int reference month giving the first month of the year
day_opt... |
Convert a frequency string to the appropriate subclass of
BaseCFTimeOffset.
def to_offset(freq):
"""Convert a frequency string to the appropriate subclass of
BaseCFTimeOffset."""
if isinstance(freq, BaseCFTimeOffset):
return freq
else:
try:
freq_data = re.match(_PATTERN,... |
Generate an equally-spaced sequence of cftime.datetime objects between
and including two dates (whose length equals the number of periods).
def _generate_linear_range(start, end, periods):
"""Generate an equally-spaced sequence of cftime.datetime objects between
and including two dates (whose length equals... |
Generate a regular range of cftime.datetime objects with a
given time offset.
Adapted from pandas.tseries.offsets.generate_range.
Parameters
----------
start : cftime.datetime, or None
Start of range
end : cftime.datetime, or None
End of range
periods : int, or None
... |
Return a fixed frequency CFTimeIndex.
Parameters
----------
start : str or cftime.datetime, optional
Left bound for generating dates.
end : str or cftime.datetime, optional
Right bound for generating dates.
periods : integer, optional
Number of periods to generate.
freq ... |
Check if the given date is in the set of possible dates created
using a length-one version of this offset class.
def onOffset(self, date):
"""Check if the given date is in the set of possible dates created
using a length-one version of this offset class."""
mod_month = (date.month - sel... |
Roll date forward to nearest start of quarter
def rollforward(self, date):
"""Roll date forward to nearest start of quarter"""
if self.onOffset(date):
return date
else:
return date + QuarterBegin(month=self.month) |
Roll date backward to nearest start of quarter
def rollback(self, date):
"""Roll date backward to nearest start of quarter"""
if self.onOffset(date):
return date
else:
return date - QuarterBegin(month=self.month) |
Roll date forward to nearest end of quarter
def rollforward(self, date):
"""Roll date forward to nearest end of quarter"""
if self.onOffset(date):
return date
else:
return date + QuarterEnd(month=self.month) |
Roll date backward to nearest end of quarter
def rollback(self, date):
"""Roll date backward to nearest end of quarter"""
if self.onOffset(date):
return date
else:
return date - QuarterEnd(month=self.month) |
Roll date forward to nearest start of year
def rollforward(self, date):
"""Roll date forward to nearest start of year"""
if self.onOffset(date):
return date
else:
return date + YearBegin(month=self.month) |
Roll date backward to nearest start of year
def rollback(self, date):
"""Roll date backward to nearest start of year"""
if self.onOffset(date):
return date
else:
return date - YearBegin(month=self.month) |
Check if the given date is in the set of possible dates created
using a length-one version of this offset class.
def onOffset(self, date):
"""Check if the given date is in the set of possible dates created
using a length-one version of this offset class."""
return date.day == _days_in_m... |
Roll date forward to nearest end of year
def rollforward(self, date):
"""Roll date forward to nearest end of year"""
if self.onOffset(date):
return date
else:
return date + YearEnd(month=self.month) |
Roll date backward to nearest end of year
def rollback(self, date):
"""Roll date backward to nearest end of year"""
if self.onOffset(date):
return date
else:
return date - YearEnd(month=self.month) |
Given an object `x`, call `str(x)` and format the returned string so
that it is numchars long, padding with trailing spaces or truncating with
ellipses as necessary
def pretty_print(x, numchars):
"""Given an object `x`, call `str(x)` and format the returned string so
that it is numchars long, padding w... |
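A minimal sketch of the pad-or-truncate contract, assuming a three-character ellipsis (the exact truncation marker in xarray may differ):

```python
def pretty_print(x, numchars):
    """Format str(x) to exactly numchars characters, padding with
    trailing spaces or truncating with an ellipsis."""
    s = str(x)
    if len(s) > numchars:
        return s[:max(numchars - 3, 0)] + '...'
    return s.ljust(numchars)

out = pretty_print('hello world', 8)
```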
Returns the first n_desired items of an array
def first_n_items(array, n_desired):
"""Returns the first n_desired items of an array"""
# Unfortunately, we can't just do array.flat[:n_desired] here because it
# might not be a numpy.ndarray. Moreover, access to elements of the array
# could be very expen... |
Returns the last n_desired items of an array
def last_n_items(array, n_desired):
"""Returns the last n_desired items of an array"""
# Unfortunately, we can't just do array.flat[-n_desired:] here because it
# might not be a numpy.ndarray. Moreover, access to elements of the array
# could be very expensi... |
Returns the last item of an array wrapped in a list, or an empty list if the array is empty.
def last_item(array):
    """Returns the last item of an array wrapped in a list, or an empty list if the array is empty."""
if array.size == 0:
# work around for https://github.com/numpy/numpy/issues/5195
return []
indexer = (slice(-1, None),) * array.ndim
r... |
Cast given object to a Timestamp and return a nicely formatted string
def format_timestamp(t):
"""Cast given object to a Timestamp and return a nicely formatted string"""
# Timestamp is only valid for 1678 to 2262
try:
datetime_str = str(pd.Timestamp(t))
except OutOfBoundsDatetime:
date... |
Cast given object to a Timedelta and return a nicely formatted string
def format_timedelta(t, timedelta_format=None):
    """Cast given object to a Timedelta and return a nicely formatted string"""
timedelta_str = str(pd.Timedelta(t))
try:
days_str, time_str = timedelta_str.split(' days ')
except V... |
Returns a succinct summary of an object as a string
def format_item(x, timedelta_format=None, quote_strings=True):
"""Returns a succinct summary of an object as a string"""
if isinstance(x, (np.datetime64, datetime)):
return format_timestamp(x)
if isinstance(x, (np.timedelta64, timedelta)):
... |
Returns succinct summaries of all items in a sequence as strings
def format_items(x):
    """Returns succinct summaries of all items in a sequence as strings"""
x = np.asarray(x)
timedelta_format = 'datetime'
if np.issubdtype(x.dtype, np.timedelta64):
x = np.asarray(x, dtype='timedelta64[ns]')
... |
Return a formatted string for as many items in the flattened version of
array that will fit within max_width characters.
def format_array_flat(array, max_width):
"""Return a formatted string for as many items in the flattened version of
array that will fit within max_width characters.
"""
# every i... |
Summary for __repr__ - use ``X.attrs[key]`` for full value.
def summarize_attr(key, value, col_width=None):
"""Summary for __repr__ - use ``X.attrs[key]`` for full value."""
# Indent key and add ':', then right-pad if col_width is not None
k_str = ' {}:'.format(key)
if col_width is not None:
... |
Get all column items to format, including both keys of `mapping`
and MultiIndex levels if any.
def _get_col_items(mapping):
"""Get all column items to format, including both keys of `mapping`
and MultiIndex levels if any.
"""
from .variable import IndexVariable
col_items = []
for k, v in m... |
Similar to dask.array.DataArray.__repr__, but without
redundant information that's already printed by the repr
function of the xarray wrapper.
def short_dask_repr(array, show_dtype=True):
"""Similar to dask.array.DataArray.__repr__, but without
redundant information that's already printed by the repr
... |
Dispatch function to call appropriate up-sampling methods on
data.
This method should not be called directly; instead, use one of the
wrapper functions supplied by `Resample`.
Parameters
----------
method : str {'asfreq', 'pad', 'ffill', 'backfill', 'bfill', 'nearest',
... |
Apply scipy.interpolate.interp1d along resampling dimension.
def _interpolate(self, kind='linear'):
"""Apply scipy.interpolate.interp1d along resampling dimension."""
# drop any existing non-dimension coordinates along the resampling
# dimension
dummy = self._obj.copy()
for k, v... |
Apply a function over each array in the group and concatenate them
together into a new array.
`func` is called like `func(ar, *args, **kwargs)` for each array `ar`
in this group.
Apply uses heuristics (like `pandas.GroupBy.apply`) to figure out how
to stack together the array. ... |
Apply a function over each Dataset in the groups generated for
resampling and concatenate them together into a new Dataset.
`func` is called like `func(ds, *args, **kwargs)` for each dataset `ds`
in this group.
Apply uses heuristics (like `pandas.GroupBy.apply`) to figure out how
... |
Reduce the items in this group by applying `func` along the
pre-defined resampling dimension.
Parameters
----------
func : function
Function which can be called in the form
`func(x, axis=axis, **kwargs)` to return the result of collapsing
an np.ndarra... |
Default indexes for a Dataset/DataArray.
Parameters
----------
coords : Mapping[Any, xarray.Variable]
Coordinate variables from which to draw default indexes.
dims : iterable
Iterable of dimension names.
Returns
-------
Mapping from indexing keys (levels/dimension names) to ... |
Index a Variable and pandas.Index together.
def isel_variable_and_index(
name: Hashable,
variable: Variable,
index: pd.Index,
indexers: Mapping[Any, Union[slice, Variable]],
) -> Tuple[Variable, Optional[pd.Index]]:
"""Index a Variable and pandas.Index together."""
if not indexers:
# no... |
Build output coordinates for an operation.
Parameters
----------
args : list
List of raw operation arguments. Any valid types for xarray operations
are OK, e.g., scalars, Variable, DataArray, Dataset.
signature : _UfuncSignature
Core dimensions signature for the operation.
e... |
Apply a variable level function over DataArray, Variable and/or ndarray
objects.
def apply_dataarray_vfunc(
func,
*args,
signature,
join='inner',
exclude_dims=frozenset(),
keep_attrs=False
):
"""Apply a variable level function over DataArray, Variable and/or ndarray
objects.
"""... |
Apply a variable level function over dicts of DataArray, DataArray,
Variable and ndarray objects.
def apply_dict_of_variables_vfunc(
func, *args, signature, join='inner', fill_value=None
):
"""Apply a variable level function over dicts of DataArray, DataArray,
Variable and ndarray objects.
"""
... |