Given an array of numeric dates in netCDF format, convert it into a numpy array of datetime objects. For standard (Gregorian) calendars, this function uses vectorized operations, which makes it much faster than cftime.num2date. In such a case, the returned array will be of type np.datetime64. Not...
Given an array of numeric timedeltas in netCDF format, convert it into a numpy timedelta64[ns] array. def decode_cf_timedelta(num_timedeltas, units): """Given an array of numeric timedeltas in netCDF format, convert it into a numpy timedelta64[ns] array. """ num_timedeltas = np.asarray(num_timedelt...
Given an array of datetimes, infer the CF calendar name def infer_calendar_name(dates): """Given an array of datetimes, infer the CF calendar name""" if np.asarray(dates).dtype == 'datetime64[ns]': return 'proleptic_gregorian' else: return np.asarray(dates).ravel()[0].calendar
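The snippet above is short enough to reconstruct fully; a minimal runnable sketch (the non-datetime64 branch assumes cftime-style objects with a `calendar` attribute, which is not exercised here):

```python
import numpy as np

def infer_calendar_name(dates):
    """Given an array of datetimes, infer the CF calendar name."""
    dates = np.asarray(dates)
    if dates.dtype == 'datetime64[ns]':
        # numpy datetime64 values are always proleptic Gregorian
        return 'proleptic_gregorian'
    # otherwise assume cftime objects, which carry their own calendar
    return dates.ravel()[0].calendar
```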
Given an array of datetimes, returns a CF compatible time-unit string of the form "{time_unit} since {date[0]}", where `time_unit` is 'days', 'hours', 'minutes' or 'seconds' (the first one that can evenly divide all unique time deltas in `dates`) def infer_datetime_units(dates): """Given an array of da...
Converts a cftime.datetime object to a string with the format: YYYY-MM-DD HH:MM:SS.UUUUUU def format_cftime_datetime(date): """Converts a cftime.datetime object to a string with the format: YYYY-MM-DD HH:MM:SS.UUUUUU """ return '{:04d}-{:02d}-{:02d} {:02d}:{:02d}:{:02d}.{:06d}'.format( date...
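Since the format string only relies on the standard date attributes, the function can be sketched and exercised with a plain `datetime.datetime` instead of a cftime object:

```python
from datetime import datetime

def format_cftime_datetime(date):
    """Format a datetime-like object as 'YYYY-MM-DD HH:MM:SS.UUUUUU'."""
    return '{:04d}-{:02d}-{:02d} {:02d}:{:02d}:{:02d}.{:06d}'.format(
        date.year, date.month, date.day,
        date.hour, date.minute, date.second, date.microsecond)
```

For example, `format_cftime_datetime(datetime(2000, 1, 2, 3, 4, 5, 6))` yields `'2000-01-02 03:04:05.000006'`.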
Given an array of timedeltas, returns a CF compatible time-unit from {'days', 'hours', 'minutes', 'seconds'} (the first one that can evenly divide all unique time deltas in `deltas`) def infer_timedelta_units(deltas): """Given an array of timedeltas, returns a CF compatible time-unit from {'days', 'hour...
Given an array of cftime.datetime objects, return an array of numpy.datetime64 objects of the same size def cftime_to_nptime(times): """Given an array of cftime.datetime objects, return an array of numpy.datetime64 objects of the same size""" times = np.asarray(times) new = np.empty(times.shape, dt...
Fallback method for encoding dates using cftime. This method is more flexible than xarray's parsing using datetime64[ns] arrays but also slower because it loops over each element. def _encode_datetime_with_cftime(dates, units, calendar): """Fallback method for encoding dates using cftime. This method...
Given an array of datetime objects, returns the tuple `(num, units, calendar)` suitable for a CF compliant time variable. Unlike `date2num`, this function can handle datetime64 arrays. See also -------- cftime.date2num def encode_cf_datetime(dates, units=None, calendar=None): """Given an arra...
DataArray.name and Dataset keys must be a string or None def _validate_dataset_names(dataset): """DataArray.name and Dataset keys must be a string or None""" def check_name(name): if isinstance(name, str): if not name: raise ValueError('Invalid name for DataArray or Dataset ...
`attrs` must have a string key and a value which is either: a number, a string, an ndarray or a list/tuple of numbers/strings. def _validate_attrs(dataset): """`attrs` must have a string key and a value which is either: a number, a string, an ndarray or a list/tuple of numbers/strings. """ def chec...
Load and decode a dataset from a file or file-like object. Parameters ---------- filename_or_obj : str, Path, file or xarray.backends.*DataStore Strings and Path objects are interpreted as a path to a netCDF file or an OpenDAP URL and opened with python-netCDF4, unless the filename ...
Open a DataArray from a netCDF file containing a single data variable. This is designed to read netCDF files with only one data variable. If multiple variables are present then a ValueError is raised. Parameters ---------- filename_or_obj : str, Path, file or xarray.backends.*DataStore St...
Open multiple files as a single dataset. Requires dask to be installed. See documentation for details on dask [1]. Attributes from the first dataset file are used for the combined dataset. Parameters ---------- paths : str or sequence Either a string glob in the form "path/to/my/files/*.nc...
This function creates an appropriate datastore for writing a dataset to disk as a netCDF file. See `Dataset.to_netcdf` for full API docs. The ``multifile`` argument is only for the private use of save_mfdataset. def to_netcdf(dataset, path_or_file=None, mode='w', format=None, group=None, eng...
Store dataset contents to a backends.*DataStore object. def dump_to_store(dataset, store, writer=None, encoder=None, encoding=None, unlimited_dims=None): """Store dataset contents to a backends.*DataStore object.""" if writer is None: writer = ArrayWriter() if encoding is None: ...
Write multiple datasets to disk as netCDF files simultaneously. This function is intended for use with datasets consisting of dask.array objects, in which case it can write the multiple datasets to disk simultaneously using a shared thread pool. When not using dask, it is no different than calling ``t...
This function creates an appropriate datastore for writing a dataset to a zarr store. See `Dataset.to_zarr` for full API docs. def to_zarr(dataset, store=None, mode='w-', synchronizer=None, group=None, encoding=None, compute=True, consolidated=False): """This function creates an appropriate dat...
Create a new MultiIndex from the current one, removing unused levels, i.e. levels not expressed in the labels. The resulting MultiIndex will have the same outward appearance, meaning the same .values and ordering. It will also be .equals() to the original. .. versionadded:: 0.20.0 Retur...
Robustly index an array, using retry logic with exponential backoff if any of the errors ``catch`` are raised. The initial_delay is measured in ms. With the default settings, the maximum delay will be in the range of 32-64 seconds. def robust_getitem(array, key, catch=Exception, max_retries=6, ...
This loads the variables and attributes simultaneously. A centralized loading function makes it easier to create data stores that do automatic encoding/decoding. For example:: class SuffixAppendingDataStore(AbstractDataStore): def load(self): va...
Encode the variables and attributes in this store Parameters ---------- variables : dict-like Dictionary of key/value (variable name / xr.Variable) pairs attributes : dict-like Dictionary of key/value (attribute name / attribute) pairs Returns --...
Top level method for putting data on this store, this method: - encodes variables/attributes - sets dimensions - sets variables Parameters ---------- variables : dict-like Dictionary of key/value (variable name / xr.Variable) pairs attributes : ...
This provides a centralized method to set the dataset attributes on the data store. Parameters ---------- attributes : dict-like Dictionary of key/value (attribute name / attribute) pairs def set_attributes(self, attributes): """ This provides a centralized ...
This provides a centralized method to set the variables on the data store. Parameters ---------- variables : dict-like Dictionary of key/value (variable name / xr.Variable) pairs check_encoding_set : list-like List of variables that should be checked for ...
This provides a centralized method to set the dimensions on the data store. Parameters ---------- variables : dict-like Dictionary of key/value (variable name / xr.Variable) pairs unlimited_dims : list-like List of dimension names that should be treated a...
Compute season (DJF, MAM, JJA, SON) from month ordinal def _season_from_months(months): """Compute season (DJF, MAM, JJA, SON) from month ordinal """ # TODO: Move "season" accessor upstream into pandas seasons = np.array(['DJF', 'MAM', 'JJA', 'SON']) months = np.asarray(months) return seasons[(...
Coerce an array of datetime-like values to a CFTimeIndex and access requested datetime component def _access_through_cftimeindex(values, name): """Coerce an array of datetime-like values to a CFTimeIndex and access requested datetime component """ from ..coding.cftimeindex import CFTimeIndex va...
Coerce an array of datetime-like values to a pandas Series and access requested datetime component def _access_through_series(values, name): """Coerce an array of datetime-like values to a pandas Series and access requested datetime component """ values_as_series = pd.Series(values.ravel()) if ...
Indirectly access pandas' libts.get_date_field by wrapping data as a Series and calling through `.dt` attribute. Parameters ---------- values : np.ndarray or dask.array-like Array-like container of datetime-like values name : str Name of datetime field to access dtype : dtype-li...
Coerce an array of datetime-like values to a pandas Series and apply requested rounding def _round_series(values, name, freq): """Coerce an array of datetime-like values to a pandas Series and apply requested rounding """ values_as_series = pd.Series(values.ravel()) method = getattr(values_as_s...
Indirectly access pandas rounding functions by wrapping data as a Series and calling through `.dt` attribute. Parameters ---------- values : np.ndarray or dask.array-like Array-like container of datetime-like values name : str (ceil, floor, round) Name of rounding function freq ...
Fill missing values in this object with data from the other object. Follows normal broadcasting and alignment rules. Parameters ---------- join : {'outer', 'inner', 'left', 'right'}, optional Method for joining the indexes of the passed objects along each dimension - 'outer': us...
Return elements from `self` or `other` depending on `cond`. Parameters ---------- cond : DataArray or Dataset with boolean dtype Locations at which to preserve this objects values. other : scalar, DataArray or Dataset, optional Value to use for locations in this object where ``cond`` is...
Load a dataset from the online repository (requires internet). If a local copy is found then always use that to avoid network traffic. Parameters ---------- name : str Name of the netcdf file containing the dataset, e.g. 'air_temperature' cache_dir : string, optional The dire...
`load_dataset` will be removed in a future version of xarray. The current behavior of this function can be achieved by using `tutorial.open_dataset(...).load()`. See Also -------- open_dataset def load_dataset(*args, **kwargs): """ `load_dataset` will be removed in a future version of xarray. The...
Make a key for caching files in the LRU cache. def _make_key(self): """Make a key for caching files in the LRU cache.""" value = (self._opener, self._args, 'a' if self._mode == 'w' else self._mode, tuple(sorted(self._kwargs.items()))) return _H...
Acquire a file object from the manager. A new file is only opened if it has expired from the least-recently-used cache. This method uses a lock, which ensures that it is thread-safe. You can safely acquire a file in multiple threads at the same time, as long as the underlying...
Explicitly close any associated file object (if necessary). def close(self, needs_lock=True): """Explicitly close any associated file object (if necessary).""" # TODO: remove needs_lock if/when we have a reentrant lock in # dask.distributed: https://github.com/dask/dask/issues/3832 with...
Shrink the cache if necessary, evicting the oldest items. def _enforce_size_limit(self, capacity): """Shrink the cache if necessary, evicting the oldest items.""" while len(self._cache) > capacity: key, value = self._cache.popitem(last=False) if self._on_evict is not None: ...
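Together with the maxsize resizing below, this eviction logic can be sketched as a minimal LRU cache (the class name and callback signature here are illustrative assumptions, not the exact xarray internals):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal sketch: size-limited LRU cache with an eviction callback."""

    def __init__(self, maxsize, on_evict=None):
        self._cache = OrderedDict()
        self._maxsize = maxsize
        self._on_evict = on_evict

    def _enforce_size_limit(self, capacity):
        # Shrink the cache if necessary, evicting the oldest items
        while len(self._cache) > capacity:
            key, value = self._cache.popitem(last=False)
            if self._on_evict is not None:
                self._on_evict(key, value)

    def __setitem__(self, key, value):
        self._cache[key] = value
        self._cache.move_to_end(key)  # newest entries live at the end
        self._enforce_size_limit(self._maxsize)

    def __getitem__(self, key):
        value = self._cache[key]
        self._cache.move_to_end(key)  # a hit makes the entry "recent" again
        return value
```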
Resize the cache, evicting the oldest items if necessary. def maxsize(self, size): """Resize the cache, evicting the oldest items if necessary.""" if size < 0: raise ValueError('maxsize must be non-negative') with self._lock: self._enforce_size_limit(size) se...
Returns system information as a dict def get_sys_info(): "Returns system information as a dict" blob = [] # get full commit hash commit = None if os.path.isdir(".git") and os.path.isdir("xarray"): try: pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "), ...
Given a key for indexing an ndarray, return an equivalent key which is a tuple with length equal to the number of dimensions. The expansion is done by replacing all `Ellipsis` items with the right number of full slices and then padding the key with full slices so that it reaches the appropriate dimensi...
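The truncated helper can be reconstructed; a sketch of the Ellipsis expansion and trailing padding:

```python
def expanded_indexer(key, ndim):
    """Expand `key` to a tuple of length `ndim`, replacing Ellipsis with the
    right number of full slices and padding with full slices at the end."""
    if not isinstance(key, tuple):
        key = (key,)
    new_key = []
    found_ellipsis = False
    for k in key:
        if k is Ellipsis and not found_ellipsis:
            # the first Ellipsis expands to the axes unaccounted for by `key`
            new_key.extend((ndim + 1 - len(key)) * [slice(None)])
            found_ellipsis = True
        elif k is Ellipsis:
            new_key.append(slice(None))  # any extra Ellipsis acts like ':'
        else:
            new_key.append(k)
    if len(new_key) > ndim:
        raise IndexError('too many indices')
    new_key.extend((ndim - len(new_key)) * [slice(None)])  # pad trailing axes
    return tuple(new_key)
```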
Convert values into a numpy array of at most 1-dimension, while preserving tuples. Adapted from pandas.core.common._asarray_tuplesafe def _asarray_tuplesafe(values): """ Convert values into a numpy array of at most 1-dimension, while preserving tuples. Adapted from pandas.core.common._asarray...
Call pd.Index.get_indexer(labels). def get_indexer_nd(index, labels, method=None, tolerance=None): """ Call pd.Index.get_indexer(labels). """ kwargs = _index_method_kwargs(method, tolerance) flat_labels = np.ravel(labels) flat_indexer = index.get_indexer(flat_labels, **kwargs) indexer = flat_index...
Given a pandas.Index and labels (e.g., from __getitem__) for one dimension, return an indexer suitable for indexing an ndarray along that dimension. If `index` is a pandas.MultiIndex and depending on `label`, return a new pandas.Index or pandas.MultiIndex (otherwise return None). def convert_label_indexer(...
Given an xarray data object and label based indexers, return a mapping of label indexers with only dimension names as keys. It groups multiple level indexers given on a multi-index dimension into a single dictionary indexer for that dimension (a ValueError is raised if this is not possible). def get_dim_ind...
Given an xarray data object and label based indexers, return a mapping of equivalent location based indexers. Also return a mapping of updated pandas index objects (in case of multi-index level drop). def remap_label_indexers(data_obj, indexers, method=None, tolerance=None): """Given an xarray data object ...
Given a slice and the size of the dimension to which it will be applied, index it with another slice to return a new slice equivalent to applying the slices sequentially def slice_slice(old_slice, applied_slice, size): """Given a slice and the size of the dimension to which it will be applied, index it...
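One way to sketch this composition is to lean on Python's `range`, which already implements slice-of-slice arithmetic (a simplification under that assumption, not the actual xarray index arithmetic):

```python
def slice_slice(old_slice, applied_slice, size):
    """Return one slice equivalent to applying old_slice, then applied_slice,
    to a dimension of the given size."""
    # range objects support slicing, so let them do the composition
    indices = range(size)[old_slice][applied_slice]
    if len(indices) == 0:
        return slice(0, 0)
    start, step = indices[0], indices.step
    stop = indices[-1] + (1 if step > 0 else -1)
    if stop < 0:
        stop = None  # a negative stop would wrap around; None means "to the start"
    return slice(start, stop, step)
```

For example, `x[slice_slice(slice(2, 15), slice(1, 10, 2), len(x))]` selects the same elements as `x[2:15][1:10:2]`.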
This function always returns a ExplicitlyIndexed subclass, so that the vectorized indexing is always possible with the returned object. def as_indexable(array): """ This function always returns a ExplicitlyIndexed subclass, so that the vectorized indexing is always possible with the returned ob...
Convert an OuterIndexer into a vectorized indexer. Parameters ---------- key : Outer/Basic Indexer An indexer to convert. shape : tuple Shape of the array subject to the indexing. Returns ------- VectorizedIndexer Tuple suitable for use to index a NumPy array with ...
Convert an OuterIndexer into an indexer for NumPy. Parameters ---------- key : Basic/OuterIndexer An indexer to convert. shape : tuple Shape of the array subject to the indexing. Returns ------- tuple Tuple suitable for use to index a NumPy array. def _outer_to_num...
Combine two indexers. Parameters ---------- old_key: ExplicitIndexer The first indexer for the original array shape: tuple of ints Shape of the original array to be indexed by old_key new_key: The second indexer for indexing original[old_key] def _combine_indexers(old_key, ...
Support explicit indexing by delegating to a raw indexing method. Outer and/or vectorized indexers are supported by indexing a second time with a NumPy array. Parameters ---------- key : ExplicitIndexer Explicit indexing object. shape : Tuple[int, ...] Shape of the indexed arra...
Convert a slice into two successive slices. The first slice always has a positive step. def _decompose_slice(key, size): """ Convert a slice into two successive slices. The first slice always has a positive step. """ start, stop, step = key.indices(size) if step > 0: # If key already has a ...
Decompose a vectorized indexer into two successive indexers, where the first indexer will be used to index backend arrays, while the second one is used to index the loaded in-memory np.ndarray. Parameters ---------- indexer: VectorizedIndexer indexing_support: one of IndexerSupport entries Ret...
Decompose an outer indexer into two successive indexers, where the first indexer will be used to index backend arrays, while the second one is used to index the loaded in-memory np.ndarray. Parameters ---------- indexer: VectorizedIndexer indexing_support: One of the entries of IndexingSupport ...
Return an identical vindex but slices are replaced by arrays def _arrayize_vectorized_indexer(indexer, shape): """ Return an identical vindex but slices are replaced by arrays """ slices = [v for v in indexer.tuple if isinstance(v, slice)] if len(slices) == 0: return indexer arrays = [v for v ...
Create a dask array using the chunks hint for dimensions of size > 1. def _dask_array_with_chunks_hint(array, chunks): """Create a dask array using the chunks hint for dimensions of size > 1.""" import dask.array as da if len(chunks) < array.ndim: raise ValueError('not enough chunks in hint') n...
Create a mask for indexing with a fill-value. Parameters ---------- indexer : ExplicitIndexer Indexer with -1 in integer or ndarray value to indicate locations in the result that should be masked. shape : tuple Shape of the array being indexed. chunks_hint : tuple, optional ...
Convert masked indices in a flat array to the nearest unmasked index. Parameters ---------- index : np.ndarray One dimensional ndarray with dtype=int. Returns ------- np.ndarray One dimensional ndarray with all values equal to -1 replaced by an adjacent non-masked eleme...
Convert masked values (-1) in an indexer to nearest unmasked values. This routine is useful for dask, where it can be much faster to index adjacent points than arbitrary points from the end of an array. Parameters ---------- indexer : ExplicitIndexer Input indexer. Returns -------...
import seaborn and handle deprecation of apionly module def import_seaborn(): '''import seaborn and handle deprecation of apionly module''' with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") try: import seaborn.apionly as sns if (w and issubc...
Build a discrete colormap and normalization of the data. def _build_discrete_cmap(cmap, levels, extend, filled): """ Build a discrete colormap and normalization of the data. """ import matplotlib as mpl if not filled: # non-filled contour plots extend = 'max' if extend == 'bot...
Use some heuristics to set good defaults for colorbar and range. Parameters ========== plot_data: Numpy array Doesn't handle xarray objects Returns ======= cmap_params : dict Use depends on the type of the plotting function def _determine_cmap_params(plot_data, vmin=None, vmax...
Determine x and y labels for showing RGB images. Attempts to infer which dimension is RGB/RGBA by size and order of dims. def _infer_xy_labels_3d(darray, x, y, rgb): """ Determine x and y labels for showing RGB images. Attempts to infer which dimension is RGB/RGBA by size and order of dims. """ ...
Determine x and y labels. For use in _plot2d darray must be a 2 dimensional data array, or 3d for imshow only. def _infer_xy_labels(darray, x, y, imshow=False, rgb=None): """ Determine x and y labels. For use in _plot2d darray must be a 2 dimensional data array, or 3d for imshow only. """ ass...
Makes informative labels if variable metadata (attrs) follows CF conventions. def label_from_attrs(da, extra=''): ''' Makes informative labels if variable metadata (attrs) follows CF conventions. ''' if da.attrs.get('long_name'): name = da.attrs['long_name'] elif da.attrs.get('stan...
Helper function which returns an array with the Intervals' boundaries. def _interval_to_bound_points(array): """ Helper function which returns an array with the Intervals' boundaries. """ array_boundaries = np.array([x.left for x in array]) array_boundaries = np.concatenate( (array...
Helper function to deal with an xarray consisting of pd.Intervals. Each interval is replaced with both boundaries. I.e. the length of xarray doubles. yarray is modified so it matches the new shape of xarray. def _interval_to_double_bound_points(xarray, yarray): """ Helper function to deal with an xarray ...
Helper function to replace the values of a coordinate array containing pd.Interval with their mid-points or - for pcolormesh - boundaries which increases length by 1. def _resolve_intervals_2dplot(val, func_name): """ Helper function to replace the values of a coordinate array containing pd.Interva...
Do all elements of x have a type from types? def _valid_other_type(x, types): """ Do all elements of x have a type from types? """ return all(any(isinstance(el, t) for t in types) for el in np.ravel(x))
Is any dtype from numpy_types superior to the dtype of x? def _valid_numpy_subdtype(x, numpy_types): """ Is any dtype from numpy_types superior to the dtype of x? """ # If any of the types given in numpy_types is understood as numpy.generic, # all possible x will be considered valid. This is proba...
Raise exception if there is anything in args that can't be plotted on an axis by matplotlib. def _ensure_plottable(*args): """ Raise exception if there is anything in args that can't be plotted on an axis by matplotlib. """ numpy_types = [np.floating, np.integer, np.timedelta64, np.datetime64] ...
Update axes with provided parameters def _update_axes(ax, xincrease, yincrease, xscale=None, yscale=None, xticks=None, yticks=None, xlim=None, ylim=None): """ Update axes with provided parameters """ if xincrease is None: pass elif xincreas...
>>> _is_monotonic(np.array([0, 1, 2])) True >>> _is_monotonic(np.array([2, 1, 0])) True >>> _is_monotonic(np.array([0, 2, 1])) False def _is_monotonic(coord, axis=0): """ >>> _is_monotonic(np.array([0, 1, 2])) True >>> _is_monotonic(np.array([2, 1, 0])) True >>> _is_monotoni...
>>> _infer_interval_breaks(np.arange(5)) array([-0.5, 0.5, 1.5, 2.5, 3.5, 4.5]) >>> _infer_interval_breaks([[0, 1], [3, 4]], axis=1) array([[-0.5, 0.5, 1.5], [ 2.5, 3.5, 4.5]]) def _infer_interval_breaks(coord, axis=0, check_monotonic=False): """ >>> _infer_interval_breaks(np.ar...
Parameters ========== func : plotting function kwargs : dict, Dictionary with arguments that need to be parsed data : ndarray, Data values Returns ======= cmap_params cbar_kwargs def _process_cmap_cbar_kwargs(func, kwargs, data): """ Parameters ========== ...
Default plot of DataArray using matplotlib.pyplot. Calls xarray plotting function based on the dimensions of darray.squeeze() =============== =========================== Dimensions Plotting function --------------- --------------------------- 1 :py:func:`xarray.plot.line` ...
Line plot of DataArray index against values Wraps :func:`matplotlib:matplotlib.pyplot.plot` Parameters ---------- darray : DataArray Must be 1 dimensional figsize : tuple, optional A tuple (width, height) of the figure in inches. Mutually exclusive with ``size`` and ``ax``....
Step plot of DataArray index against values Similar to :func:`matplotlib:matplotlib.pyplot.step` Parameters ---------- where : {'pre', 'post', 'mid'}, optional, default 'pre' Define where the steps should be placed: - 'pre': The y value is continued constantly to the left from ...
Histogram of DataArray Wraps :func:`matplotlib:matplotlib.pyplot.hist` Plots N dimensional arrays by first flattening the array. Parameters ---------- darray : DataArray Can be any dimension figsize : tuple, optional A tuple (width, height) of the figure in inches. Mut...
Decorator for common 2d plotting logic Also adds the 2d plot method to class _PlotMethods def _plot2d(plotfunc): """ Decorator for common 2d plotting logic Also adds the 2d plot method to class _PlotMethods """ commondoc = """ Parameters ---------- darray : DataArray Must ...
Image plot of 2d DataArray using matplotlib.pyplot Wraps :func:`matplotlib:matplotlib.pyplot.imshow` While other plot methods require the DataArray to be strictly two-dimensional, ``imshow`` also accepts a 3D array where some dimension can be interpreted as RGB or RGBA color channels and allows th...
Contour plot of 2d DataArray Wraps :func:`matplotlib:matplotlib.pyplot.contour` def contour(x, y, z, ax, **kwargs): """ Contour plot of 2d DataArray Wraps :func:`matplotlib:matplotlib.pyplot.contour` """ primitive = ax.contour(x, y, z, **kwargs) return primitive
Filled contour plot of 2d DataArray Wraps :func:`matplotlib:matplotlib.pyplot.contourf` def contourf(x, y, z, ax, **kwargs): """ Filled contour plot of 2d DataArray Wraps :func:`matplotlib:matplotlib.pyplot.contourf` """ primitive = ax.contourf(x, y, z, **kwargs) return primitive
Pseudocolor plot of 2d DataArray Wraps :func:`matplotlib:matplotlib.pyplot.pcolormesh` def pcolormesh(x, y, z, ax, infer_intervals=None, **kwargs): """ Pseudocolor plot of 2d DataArray Wraps :func:`matplotlib:matplotlib.pyplot.pcolormesh` """ # decide on a default for infer_intervals (GH781)...
Determine the dask scheduler that is being used. None is returned if no dask scheduler is active. See also -------- dask.base.get_scheduler def _get_scheduler(get=None, collection=None): """Determine the dask scheduler that is being used. None is returned if no dask scheduler is active. ...
Acquire a lock, possibly in a non-blocking fashion. Includes backwards compatibility hacks for old versions of Python, dask and dask-distributed. def acquire(lock, blocking=True): """Acquire a lock, possibly in a non-blocking fashion. Includes backwards compatibility hacks for old versions of Python,...
Combine a sequence of locks into a single lock. def combine_locks(locks): """Combine a sequence of locks into a single lock.""" all_locks = [] for lock in locks: if isinstance(lock, CombinedLock): all_locks.extend(lock.locks) elif lock is not None: all_locks.append(l...
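The flattening logic is visible above; a self-contained sketch with a minimal `CombinedLock` (this class is an assumption modeled on the description, and the empty-input fallback to `threading.Lock()` stands in for whatever dummy lock the real code returns):

```python
import threading

class CombinedLock:
    """A sketch: acquires every underlying lock on enter, releases on exit."""

    def __init__(self, locks):
        self.locks = tuple(set(locks))  # deduplicate

    def __enter__(self):
        for lock in self.locks:
            lock.acquire()
        return self

    def __exit__(self, *args):
        for lock in self.locks:
            lock.release()

def combine_locks(locks):
    """Combine a sequence of locks into a single lock."""
    all_locks = []
    for lock in locks:
        if isinstance(lock, CombinedLock):
            all_locks.extend(lock.locks)  # flatten nested combinations
        elif lock is not None:
            all_locks.append(lock)
    if len(all_locks) > 1:
        return CombinedLock(all_locks)
    if len(all_locks) == 1:
        return all_locks[0]
    return threading.Lock()  # assumption: stand-in for a no-op dummy lock
```

A single non-None lock is returned unchanged, so no wrapper overhead is paid in the common case.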
Get a list of dimensions to squeeze out. def get_squeeze_dims(xarray_obj, dim: Union[Hashable, Iterable[Hashable], None] = None, axis: Union[int, Iterable[int], None] = None ) -> List[Hashable]: """Get a list of dimensions to squeeze out. """ i...
Return a new object with the same shape and type as a given object. Parameters ---------- other : DataArray, Dataset, or Variable The reference object in input fill_value : scalar Value to fill the new object with before returning it. dtype : dtype, optional dtype of the new...
Inner function of full_like, where other must be a variable def _full_like_variable(other, fill_value, dtype: Union[str, np.dtype, None] = None): """Inner function of full_like, where other must be a variable """ from .variable import Variable if isinstance(other.data, dask_arr...
Shorthand for full_like(other, 0, dtype) def zeros_like(other, dtype: Union[str, np.dtype, None] = None): """Shorthand for full_like(other, 0, dtype) """ return full_like(other, 0, dtype)
Shorthand for full_like(other, 1, dtype) def ones_like(other, dtype: Union[str, np.dtype, None] = None): """Shorthand for full_like(other, 1, dtype) """ return full_like(other, 1, dtype)
Check if a dtype is a subclass of the numpy datetime types def is_np_datetime_like(dtype: Union[str, np.dtype]) -> bool: """Check if a dtype is a subclass of the numpy datetime types """ return (np.issubdtype(dtype, np.datetime64) or np.issubdtype(dtype, np.timedelta64))
Check if an array contains cftime.datetime objects def _contains_cftime_datetimes(array) -> bool: """Check if an array contains cftime.datetime objects """ try: from cftime import datetime as cftime_datetime except ImportError: return False else: if array.dtype == np.dtype('...
Return axis number(s) corresponding to dimension(s) in this array. Parameters ---------- dim : str or iterable of str Dimension name(s) for which to lookup axes. Returns ------- int or tuple of int Axis number or numbers corresponding to the give...
Ordered mapping from dimension names to lengths. Immutable. See also -------- Dataset.sizes def sizes(self: Any) -> Mapping[Hashable, int]: """Ordered mapping from dimension names to lengths. Immutable. See also -------- Dataset.sizes ...
Provide method for the key-autocompletions in IPython. See http://ipython.readthedocs.io/en/stable/config/integrating.html#tab-completion for the details. def _ipython_key_completions_(self) -> List[str]: """Provide method for the key-autocompletions in IPython. See http://ipython.readt...