Creates a dictionary representing the state of the risk report. Parameters ---------- start_session : pd.Timestamp Start of period (inclusive) to produce metrics on end_session : pd.Timestamp End of period (inclusive) to produce metrics on algorithm_retur...
For the given root symbol, find the contract that is considered active on a specific date at a specific offset. def _get_active_contract_at_offset(self, root_symbol, dt, offset): """ For the given root symbol, find the contract that is considered active on a specific date at a specific ...
Parameters ---------- root_symbol : str The root symbol for the contract chain. dt : Timestamp The datetime for which to retrieve the current contract. offset : int The offset from the primary contract. 0 is the primary, 1 is the secondary,...
Get the rolls, i.e. the session at which to hop from contract to contract in the chain. Parameters ---------- root_symbol : str The root symbol for which to calculate rolls. start : Timestamp Start of the date range. end : Timestamp En...
r""" Return the active contract based on the previous trading day's volume. In the rare case that a double volume switch occurs we treat the first switch as the roll. Take the following case for example: | +++++ _____ | + __ / <--- 'G' | ...
Parameters ---------- root_symbol : str The root symbol for the contract chain. dt : Timestamp The datetime for which to retrieve the current contract. offset : int The offset from the primary contract. 0 is the primary, 1 is the secondary,...
Coerce buffer data for an AdjustedArray into a standard scalar representation, returning the coerced array and a dict of arguments to pass to np.view when providing a user-facing view of the underlying data. - float* data is coerced to float64 with viewtype float64. - int32, int64, and uint32 are...
Merge lists of new and existing adjustments for a given index by appending or prepending new adjustments to existing adjustments. Notes ----- This method is meant to be used with ``toolz.merge_with`` to merge adjustment mappings. In case of a collision ``adjustment_lists`` contains two lists, e...
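The merge described here can be sketched with ``toolz.merge_with``, which collects the values for each colliding key into a list and hands them to a combining function. A minimal illustration only, not zipline's implementation; the helper name is hypothetical:

.. code-block:: python

    from toolz import merge_with

    def _concat_adjustment_lists(lists):
        # ``lists`` holds one list of adjustments per input mapping that
        # contained this row index; flatten them in order.
        out = []
        for sublist in lists:
            out.extend(sublist)
        return out

    existing = {3: ['existing-adj']}
    new = {3: ['new-adj'], 5: ['other-adj']}
    merged = merge_with(_concat_adjustment_lists, existing, new)
    # {3: ['existing-adj', 'new-adj'], 5: ['other-adj']}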
Return the input as a numpy ndarray. This is a no-op if the input is already an ndarray. If the input is an adjusted_array, this extracts a read-only view of its internal data buffer. Parameters ---------- ndarray_or_adjusted_array : numpy.ndarray | zipline.data.adjusted_array Returns --...
Check that a window of length `window_length` is well-defined on `data`. Parameters ---------- data : np.ndarray[ndim=2] The array of data to check. window_length : int Length of the desired window. Returns ------- None Raises ------ WindowLengthNotPositive ...
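A minimal sketch of such a check, assuming the ``WindowLengthNotPositive`` exception named in the docstring; ``WindowLengthTooLong`` is a hypothetical stand-in for the too-long case:

.. code-block:: python

    class WindowLengthNotPositive(ValueError):
        pass

    class WindowLengthTooLong(ValueError):  # hypothetical name
        pass

    def check_window_length(data, window_length):
        # A window is well-defined when it is positive and no longer than
        # the number of rows available.
        if window_length < 1:
            raise WindowLengthNotPositive(
                "window_length must be >= 1, got %d" % window_length)
        if window_length > data.shape[0]:
            raise WindowLengthTooLong(
                "window_length %d exceeds the %d rows of data"
                % (window_length, data.shape[0]))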
Merge ``adjustments`` with existing adjustments, handling index collisions according to ``method``. Parameters ---------- adjustments : dict[int -> list[Adjustment]] The mapping of row indices to lists of adjustments that should be appended to existing adjustment...
The iterator produced when `traverse` is called on this Array. def _iterator_type(self): """ The iterator produced when `traverse` is called on this Array. """ if isinstance(self._data, LabelArray): return LabelWindow return CONCRETE_WINDOW_TYPES[self._data.dtype]
Produce an iterator of rolling windows over our data. Each emitted window will have `window_length` rows. Parameters ---------- window_length : int The number of rows in each emitted window. offset : int, optional Number of rows to skip before the first...
Return a string representation of the data stored in this array. def inspect(self): """ Return a string representation of the data stored in this array. """ return dedent( """\ Adjusted Array ({dtype}): Data: {data!r} Adjustm...
Map a function over baseline and adjustment values in place. Note that the baseline data values must be a LabelArray. def update_labels(self, func): """ Map a function over baseline and adjustment values in place. Note that the baseline data values must be a LabelArray. """ ...
Handle a TradingControlViolation, either by raising or logging an error with information about the failure. If dynamic information should be displayed as well, pass it in via `metadata`. def handle_violation(self, asset, amount, datetime, metadata=None): """ Handle a TradingCo...
Fail if we've already placed self.max_count orders today. def validate(self, asset, amount, portfolio, algo_datetime, algo_current_data): """ Fail if we've already placed self.max_count orders today. """ ...
Fail if the asset is in the restricted_list. def validate(self, asset, amount, portfolio, algo_datetime, algo_current_data): """ Fail if the asset is in the restricted_list. """ if self.restrictions.is_...
Fail if the magnitude of the given order exceeds either self.max_shares or self.max_notional. def validate(self, asset, amount, portfolio, algo_datetime, algo_current_data): """ Fail if the magnitude of the giv...
Fail if the given order would cause the magnitude of our position to be greater in shares than self.max_shares or greater in dollar value than self.max_notional. def validate(self, asset, amount, portfolio, algo_datetime, ...
Fail if we would hold negative shares of asset after completing this order. def validate(self, asset, amount, portfolio, algo_datetime, algo_current_data): """ Fail if we would hold negative shares of asset aft...
Fail if the algo has passed this Asset's end_date, or is before the Asset's start_date. def validate(self, asset, amount, portfolio, algo_datetime, algo_current_data): """ Fail if the algo has passed this Asset's ...
Fail if the leverage is greater than the allowed leverage. def validate(self, _portfolio, _account, _algo_datetime, _algo_current_data): """ Fail if the leverage is greater than the allowed leverage. """ if _account.lev...
Make validation checks if we are after the deadline. Fail if the leverage is less than the min leverage. def validate(self, _portfolio, account, algo_datetime, _algo_current_data): """ Make validation checks if we are after the...
Alter columns from a table. Parameters ---------- name : str The name of the table. *columns The new columns to have. selection_string : str, optional The string to use in the selection. If not provided, it will select all of the new columns from the old table. ...
Downgrades the assets db at the given engine to the desired version. Parameters ---------- engine : Engine An SQLAlchemy engine to the assets database. desired_version : int The desired resulting version for the assets database. def downgrade(engine, desired_version): """Downgrades...
Decorator for marking that a method is a downgrade from a version to the previous version. Parameters ---------- src : int The version this downgrades from. Returns ------- decorator : callable[(callable) -> callable] The decorator to apply. def downgrades(src): """Decor...
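Stripped to its essentials, the decorator can be sketched as a parameterized registrar into a module-level mapping; the registry name below is hypothetical, and the real implementation does more bookkeeping around each downgrade:

.. code-block:: python

    _downgrade_methods = {}  # hypothetical module-level registry

    def downgrades(src):
        """Mark ``f`` as the downgrade from version ``src`` to ``src - 1``."""
        def decorator(f):
            _downgrade_methods[src] = f
            return f
        return decorator

    @downgrades(2)
    def _downgrade_v2(op):
        ...  # body elided; would undo the schema changes of version 2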
Downgrade assets db by removing the 'tick_size' column and renaming the 'multiplier' column. def _downgrade_v1(op): """ Downgrade assets db by removing the 'tick_size' column and renaming the 'multiplier' column. """ # Drop indices before batch # This is to prevent index collision when crea...
Downgrade assets db by removing the 'auto_close_date' column. def _downgrade_v2(op): """ Downgrade assets db by removing the 'auto_close_date' column. """ # Drop indices before batch # This is to prevent index collision when creating the temp table op.drop_index('ix_equities_fuzzy_symbol') ...
Downgrade assets db by adding a not null constraint on ``equities.first_traded`` def _downgrade_v3(op): """ Downgrade assets db by adding a not null constraint on ``equities.first_traded`` """ op.create_table( '_new_equities', sa.Column( 'sid', sa.Integer...
Downgrades assets db by copying the `exchange_full` column to `exchange`, then dropping the `exchange_full` column. def _downgrade_v4(op): """ Downgrades assets db by copying the `exchange_full` column to `exchange`, then dropping the `exchange_full` column. """ op.drop_index('ix_equities_fuzzy...
Create a family of metrics sets functions that read from the same metrics set mapping. Returns ------- metrics_sets : mappingproxy The mapping of metrics sets to load functions. register : callable The function which registers new metrics sets in the ``metrics_sets`` mapping...
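The factory pattern this describes can be sketched with a private dict exposed through a read-only ``mappingproxy`` and closures that mutate it; a hedged sketch with simplified error handling:

.. code-block:: python

    from types import MappingProxyType

    def _make_metrics_set_core():
        _metrics_sets = {}
        metrics_sets = MappingProxyType(_metrics_sets)  # read-only view

        def register(name, function=None):
            if function is None:
                # allow decorator usage: @register('name')
                return lambda f: register(name, f)
            if name in _metrics_sets:
                raise ValueError('metrics set %r already registered' % name)
            _metrics_sets[name] = function
            return function

        def load(name):
            try:
                return _metrics_sets[name]()
            except KeyError:
                raise ValueError('no metrics set registered as %r' % name)

        return metrics_sets, register, load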
Verify that the columns of ``events`` can be used by an EarningsEstimatesLoader to serve the BoundColumns described by `columns`. def validate_column_specs(events, columns): """ Verify that the columns of ``events`` can be used by an EarningsEstimatesLoader to serve the BoundColumns described by ...
Selects the requested data for each date. Parameters ---------- zero_qtr_data : pd.DataFrame The 'time zero' data for each calendar date per sid. zeroth_quarter_idx : pd.Index An index of calendar dates, sid, and normalized quarters, for only the rows...
Compute the index in `dates` where the split-adjusted-asof-date falls. This is the date up to and including which we will need to un-apply all adjustments and then re-apply them as they come in. After this date, adjustments are applied as normal. Parameters ----------...
Given a sid, collect all overwrites that should be applied for this sid at each quarter boundary. Parameters ---------- group : pd.DataFrame The data for `sid`. dates : pd.DatetimeIndex The calendar dates for which estimates data is requested. req...
Merge adjustments for a particular sid into a dictionary containing adjustments for all sids. Parameters ---------- all_adjustments_for_sid : dict[int -> AdjustedArray] All adjustments for a particular sid. col_to_all_adjustments : dict[int -> AdjustedArray] ...
Creates an AdjustedArray from the given estimates data for the given dates. Parameters ---------- zero_qtr_data : pd.DataFrame The 'time zero' data for each calendar date per sid. requested_qtr_data : pd.DataFrame The requested quarter data for each calen...
Add entries to the dictionary of columns to adjustments for the given sid and the given quarter. Parameters ---------- col_to_overwrites : dict [column_name -> list of ArrayAdjustment] A dictionary mapping column names to all overwrites for those columns. ...
Determine the last piece of information we know for each column on each date in the index for each sid and quarter. Parameters ---------- assets_with_data : pd.Index Index of all assets that appear in the raw data given to the loader. columns : iterable o...
Filters for releases that are on or after each simulation date and determines the previous quarter by picking out the most recent release relative to each date in the index. Parameters ---------- stacked_last_per_qtr : pd.DataFrame A DataFrame with index of calendar ...
Collects both overwrites and adjustments for a particular sid. Parameters ---------- split_adjusted_asof_idx : int The integer index of the date on which the data was split-adjusted. split_adjusted_cols_for_group : list of str The names of requested columns that ...
Calculates both split adjustments and overwrites for all sids. def get_adjustments(self, zero_qtr_data, requested_qtr_data, last_per_qtr, dates, assets, columns, ...
Determines the date until which the adjustment at the given date index should be applied for the given quarter. Parameters ---------- adjustment_ts : pd.Timestamp The timestamp at which the adjustment occurs. dates : pd.DatetimeIndex The calendar dates ov...
Collect split adjustments that occur before the split-adjusted-asof-date. All those adjustments must first be UN-applied at the first date index and then re-applied on the appropriate dates in order to match point in time share pricing data. Parameters ---------- split_a...
Collect split adjustments that occur after the split-adjusted-asof-date. Each adjustment needs to be applied to all dates on which knowledge for the requested quarter was older than the date of the adjustment. Parameters ---------- post_adjustments : tuple(list(float), l...
dates : pd.DatetimeIndex The calendar dates. sid : int The sid for which we want to retrieve adjustments. split_adjusted_asof_idx : int The index in `dates` as-of which the data is split adjusted. Returns ------- pre_adjustments : tuple(list(f...
Merge split adjustments with the dict containing overwrites. Parameters ---------- pre : dict[str -> dict[int -> list]] The adjustments that occur before the split-adjusted-asof-date. post : dict[str -> dict[int -> list]] The adjustments that occur after the spli...
Collect split adjustments for previous quarters and apply them to the given dictionary of splits for the given sid. Since overwrites just replace all estimates before the new quarter with NaN, we don't need to worry about re-applying split adjustments. Parameters ---------- ...
Collect split adjustments for future quarters. Re-apply adjustments that would be overwritten by overwrites. Merge split adjustments with overwrites into the given dictionary of splits for the given sid. Parameters ---------- adjustments_for_sid : dict[str -> dict[int -> list]] ...
Convenience constructor for passing `decay_rate` in terms of `span`. Forwards `decay_rate` as `1 - (2.0 / (1 + span))`. This provides the behavior equivalent to passing `span` to pandas.ewma. Examples -------- .. code-block:: python # Equivalent to: # ...
Convenience constructor for passing ``decay_rate`` in terms of half life. Forwards ``decay_rate`` as ``exp(log(.5) / halflife)``. This provides the behavior equivalent to passing `halflife` to pandas.ewma. Examples -------- .. code-block:: python # Equival...
Convenience constructor for passing `decay_rate` in terms of center of mass. Forwards `decay_rate` as `1 - (1 / (1 + center_of_mass))`. This provides behavior equivalent to passing `center_of_mass` to pandas.ewma. Examples -------- .. code-block:: python # E...
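Taken together, the three convenience constructors just reparameterize the same smoothing constant; a sketch of the conversions as standalone functions:

.. code-block:: python

    import numpy as np

    def decay_rate_from_span(span):
        return 1.0 - (2.0 / (1.0 + span))

    def decay_rate_from_halflife(halflife):
        return np.exp(np.log(0.5) / halflife)

    def decay_rate_from_center_of_mass(center_of_mass):
        return 1.0 - (1.0 / (1.0 + center_of_mass))

    # span=15 corresponds to decay_rate 0.875, matching pandas' span weighting.
    assert abs(decay_rate_from_span(15) - 0.875) < 1e-12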
Check if a and b are equal with some tolerance. Parameters ---------- a, b : float The floats to check for equality. atol : float, optional The absolute tolerance. rtol : float, optional The relative tolerance. equal_nan : bool, optional Should NaN compare equal?...
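A minimal sketch of such a comparison, following the ``|a - b| <= atol + rtol * |b|`` convention also used by ``numpy.isclose``:

.. code-block:: python

    import math

    def tolerant_equals(a, b, atol=1e-6, rtol=1e-6, equal_nan=False):
        # NaN never compares equal unless the caller opts in.
        if equal_nan and math.isnan(a) and math.isnan(b):
            return True
        return math.fabs(a - b) <= (atol + rtol * math.fabs(b))

    assert tolerant_equals(1.0, 1.0 + 1e-9)
    assert not tolerant_equals(1.0, 1.1)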
Round a to the nearest integer if that integer is within an epsilon of a. def round_if_near_integer(a, epsilon=1e-4): """ Round a to the nearest integer if that integer is within an epsilon of a. """ if abs(a - round(a)) <= epsilon: return round(a) else: return a
A decorator for methods whose signature is f(self, other) that coerces ``other`` to ``self.dtype``. This is used to make comparison operations between numbers and `Factor` instances work independently of whether the user supplies a float or integer literal. For example, if I write:: my_fi...
Compute the expected return dtype for the given binary operator. Parameters ---------- op : str Operator symbol, (e.g. '+', '-', ...). left : numpy.dtype Dtype of left hand side. right : numpy.dtype Dtype of right hand side. Returns ------- outdtype : numpy.dtyp...
Factory function for making binary operator methods on a Factor subclass. Returns a function, "binary_operator" suitable for implementing functions like __add__. def binary_operator(op): """ Factory function for making binary operator methods on a Factor subclass. Returns a function, "binary_oper...
Factory function for making binary operator methods on a Factor. Returns a function, "reflected_binary_operator" suitable for implementing functions like __radd__. def reflected_binary_operator(op): """ Factory function for making binary operator methods on a Factor. Returns a function, "reflecte...
Factory function for making unary operator methods for Factors. def unary_operator(op): """ Factory function for making unary operator methods for Factors. """ # Only negate is currently supported. valid_ops = {'-'} if op not in valid_ops: raise ValueError("Invalid unary operator %s." %...
Factory function for producing function application methods for Factor subclasses. def function_application(func): """ Factory function for producing function application methods for Factor subclasses. """ if func not in NUMEXPR_MATH_FUNCS: raise ValueError("Unsupported mathematical fun...
This implementation is based on scipy.stats.mstats.winsorize def winsorize(row, min_percentile, max_percentile): """ This implementation is based on scipy.stats.mstats.winsorize """ a = row.copy() nan_count = isnan(row).sum() nonnan_count = a.size - nan_count # NOTE: argsort() sorts nans t...
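For intuition, winsorization can be approximated by a NaN-aware percentile clip. This is a simplified sketch only; it interpolates percentiles rather than reproducing scipy.stats.mstats.winsorize exactly:

.. code-block:: python

    import numpy as np

    def winsorize_row(row, min_percentile, max_percentile):
        # min/max_percentile are fractions in [0, 1], as in the docstrings above.
        a = row.copy()
        lo = np.nanpercentile(a, min_percentile * 100)
        hi = np.nanpercentile(a, max_percentile * 100)
        a[a < lo] = lo
        a[a > hi] = hi
        return a

    print(winsorize_row(np.array([1., 2., 3., 4., 100.]), 0.0, 0.8))
    # the outlier 100.0 is pulled down toward the 80th percentile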
Construct a Factor that computes ``self`` and subtracts the mean from each row of the result. If ``mask`` is supplied, ignore values where ``mask`` returns False when computing row means, and output NaN anywhere the mask is False. If ``groupby`` is supplied, compute by partitioning each row...
Construct a Factor that Z-Scores each day's results. The Z-Score of a row is defined as:: (row - row.mean()) / row.stddev() If ``mask`` is supplied, ignore values where ``mask`` returns False when computing row means and standard deviations, and output NaN anywhere the mas...
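A row-wise sketch over a 2D (days x assets) array, ignoring NaNs and omitting the mask/groupby handling described above:

.. code-block:: python

    import numpy as np

    def zscore_rows(data):
        # (row - row.mean()) / row.stddev(), computed per row.
        mean = np.nanmean(data, axis=1, keepdims=True)
        std = np.nanstd(data, axis=1, keepdims=True)
        return (data - mean) / std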
Construct a new Factor representing the sorted rank of each column within each row. Parameters ---------- method : str, {'ordinal', 'min', 'max', 'dense', 'average'} The method used to assign ranks to tied elements. See `scipy.stats.rankdata` for a full descripti...
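Per-row ranking can be sketched directly with ``scipy.stats.rankdata``, which supports all five tie-breaking methods listed above:

.. code-block:: python

    import numpy as np
    from scipy.stats import rankdata

    def rank_rows(data, method='ordinal'):
        # Apply rankdata independently to each row of a 2D array.
        return np.apply_along_axis(rankdata, 1, data, method=method)

    data = np.array([[3.0, 1.0, 2.0],
                     [1.0, 1.0, 2.0]])
    print(rank_rows(data, method='min'))
    # [[3. 1. 2.]
    #  [1. 1. 3.]]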
Construct a new Factor that computes rolling pearson correlation coefficients between `target` and the columns of `self`. This method can only be called on factors which are deemed safe for use as inputs to other factors. This includes `Returns` and any factors created from `Factor.rank...
Construct a new Factor that computes rolling spearman rank correlation coefficients between `target` and the columns of `self`. This method can only be called on factors which are deemed safe for use as inputs to other factors. This includes `Returns` and any factors created from `Facto...
Construct a new Factor that performs an ordinary least-squares regression predicting the columns of `self` from `target`. This method can only be called on factors which are deemed safe for use as inputs to other factors. This includes `Returns` and any factors created from `Factor.rank...
Construct a new factor that winsorizes the result of this factor. Winsorizing changes values ranked less than the minimum percentile to the value at the minimum percentile. Similarly, values ranking above the maximum percentile are changed to the value at the maximum percentile. ...
Construct a Classifier computing quantiles of the output of ``self``. Every non-NaN data point in the output is labelled with an integer value from 0 to (bins - 1). NaNs are labelled with -1. If ``mask`` is supplied, ignore data points in locations for which ``mask`` produces False, and ...
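A one-row sketch of the labelling scheme using ``pandas.qcut`` (duplicate bin edges and other corner cases are not handled here):

.. code-block:: python

    import numpy as np
    import pandas as pd

    def quantiles_row(row, bins):
        labels = pd.qcut(row, bins, labels=False)        # NaN stays NaN
        return np.where(np.isnan(labels), -1, labels).astype(np.int64)

    print(quantiles_row(np.array([1.0, 5.0, np.nan, 2.0, 9.0]), 2))
    # [ 0  1 -1  0  1]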
Construct a Filter matching the top N asset values of self each day. If ``groupby`` is supplied, returns a Filter matching the top N asset values for each group. Parameters ---------- N : int Number of assets passing the returned filter each day. mask : zipl...
Construct a Filter matching the bottom N asset values of self each day. If ``groupby`` is supplied, returns a Filter matching the bottom N asset values for each group. Parameters ---------- N : int Number of assets passing the returned filter each day. mask ...
Construct a new Filter representing entries from the output of this Factor that fall within the percentile range defined by min_percentile and max_percentile. Parameters ---------- min_percentile : float [0.0, 100.0] Return True for assets falling above this percenti...
Verify that the stored rank method is valid. def _validate(self): """ Verify that the stored rank method is valid. """ if self._method not in _RANK_METHODS: raise UnknownRankMethod( method=self._method, choices=set(_RANK_METHODS), ...
For each row in the input, compute a like-shaped array of per-row ranks. def _compute(self, arrays, dates, assets, mask): """ For each row in the input, compute a like-shaped array of per-row ranks. """ return masked_rankdata_2d( arrays[0], mask, ...
Convert a time into microseconds since midnight. Parameters ---------- time : datetime.time The time to convert. Returns ------- us : int The number of microseconds since midnight. Notes ----- This does not account for leap seconds or daylight savings. def _time_to_m...
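The conversion itself is simple arithmetic; a sketch:

.. code-block:: python

    import datetime

    def time_to_us(time):
        return (time.hour * 3_600_000_000
                + time.minute * 60_000_000
                + time.second * 1_000_000
                + time.microsecond)

    assert time_to_us(datetime.time(9, 30)) == 34_200_000_000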
Return a mask of all of the datetimes in ``dts`` that are between ``start`` and ``end``. Parameters ---------- dts : pd.DatetimeIndex The index to mask. start : time Mask away times less than the start. end : time Mask away times greater than the end. include_start : ...
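One way to sketch this on top of pandas itself is ``DatetimeIndex.indexer_between_time``; the real helper may compare microsecond offsets instead, but the resulting mask is the same:

.. code-block:: python

    import datetime
    import numpy as np
    import pandas as pd

    def mask_between_time(dts, start, end, include_start=True, include_end=True):
        mask = np.zeros(len(dts), dtype=bool)
        mask[dts.indexer_between_time(start, end,
                                      include_start=include_start,
                                      include_end=include_end)] = True
        return mask

    dts = pd.date_range('2014-01-01 09:00', periods=6, freq='30min')
    print(mask_between_time(dts, datetime.time(9, 30), datetime.time(10, 30)))
    # [False  True  True  True False False]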
Find the index of ``dt`` in ``dts``. This function should be used instead of `dts.get_loc(dt)` if the index is large enough that we don't want to initialize a hash table in ``dts``. In particular, this should always be used on minutely trading calendars. Parameters ---------- dts : pd.Datetime...
Find values in ``dts`` closest but not equal to ``dt``. Returns a pair of (last_before, first_after). When ``dt`` is less than any element in ``dts``, ``last_before`` is None. When ``dt`` is greater than any element in ``dts``, ``first_after`` is None. ``dts`` must be unique and sorted in increasing order...
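Both this helper and ``find_in_sorted_index`` above come down to binary search; a sketch using ``searchsorted``, assuming ``dts`` is unique and sorted:

.. code-block:: python

    def find_in_sorted_index(dts, dt):
        ix = dts.searchsorted(dt)
        if ix == len(dts) or dts[ix] != dt:
            raise LookupError("%s is not in the index" % dt)
        return ix

    def nearest_unequal_elements(dts, dt):
        ix = dts.searchsorted(dt)
        last_before = dts[ix - 1] if ix > 0 else None
        # step over ``dt`` itself if it is present in the index
        after_ix = ix + 1 if ix < len(dts) and dts[ix] == dt else ix
        first_after = dts[after_ix] if after_ix < len(dts) else None
        return last_before, first_after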
Prepare list of pandas DataFrames to be used as input to pd.concat. Ensure any columns of type 'category' have the same categories across each dataframe. Parameters ---------- df_list : list List of dataframes with same columns. inplace : bool True if input list can be modified....
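A sketch of the category alignment using ``pandas.api.types.union_categoricals``, so that ``pd.concat`` keeps the categorical dtype instead of falling back to object:

.. code-block:: python

    import pandas as pd
    from pandas.api.types import union_categoricals

    def align_categories(df_list):
        for col in df_list[0].columns:
            if str(df_list[0][col].dtype) == 'category':
                # Give every frame the union of all observed categories.
                union = union_categoricals([df[col] for df in df_list])
                for df in df_list:
                    df[col] = pd.Categorical(df[col],
                                             categories=union.categories)
        return df_list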
Check that a list of Index objects are all equal. Parameters ---------- indexes : iterable[pd.Index] Iterable of indexes to check. Raises ------ ValueError If the indexes are not all the same. def check_indexes_all_same(indexes, message="Indexes are not equal."): """Check ...
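A minimal sketch (the real helper likely reports which entries differ; here we just raise):

.. code-block:: python

    def check_indexes_all_same(indexes, message="Indexes are not equal."):
        iterator = iter(indexes)
        first = next(iterator)
        for other in iterator:
            if not first.equals(other):
                raise ValueError(message)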
Compute the set of resource columns required to serve ``next_value_columns`` and ``previous_value_columns``. def required_event_fields(next_value_columns, previous_value_columns): """ Compute the set of resource columns required to serve ``next_value_columns`` and ``previous_value_columns``. """ ...
Verify that the columns of ``events`` can be used by an EventsLoader to serve the BoundColumns described by ``next_value_columns`` and ``previous_value_columns``. def validate_column_specs(events, next_value_columns, previous_value_columns): """ Verify that the columns of ``events`` can be used by an E...
Split requested columns into columns that should load the next known value and columns that should load the previous known value. Parameters ---------- requested_columns : iterable[BoundColumn] Returns ------- next_cols, previous_cols : iterable[BoundColumn], it...
Eq check with a short-circuit for identical objects. def compare_arrays(left, right): "Eq check with a short-circuit for identical objects." return ( left is right or ((left.shape == right.shape) and (left == right).all()) )
Rehydrate a LabelArray from the codes and metadata. Parameters ---------- codes : np.ndarray[integral] The codes for the label array. categories : np.ndarray[object] The unique string categories. reverse_categories : dict[str, int] The mapping...
Convert self into a regular ndarray of ints. This is an O(1) operation. It does not copy the underlying data. def as_int_array(self): """ Convert self into a regular ndarray of ints. This is an O(1) operation. It does not copy the underlying data. """ return self.view(...
Coerce self into a pandas categorical. This is only defined on 1D arrays, since that's all pandas supports. def as_categorical(self): """ Coerce self into a pandas categorical. This is only defined on 1D arrays, since that's all pandas supports. """ if len(self.shape) ...
Coerce self into a pandas DataFrame of Categoricals. def as_categorical_frame(self, index, columns, name=None): """ Coerce self into a pandas DataFrame of Categoricals. """ if len(self.shape) != 2: raise ValueError( "Can't convert a non-2D LabelArray into a D...
Set scalar value into the array. Parameters ---------- indexer : any The indexer to set the value at. value : str The value to assign at the given locations. Raises ------ ValueError Raised when ``value`` is not a value elemen...
Shared code for __eq__ and __ne__, parameterized on the actual comparison operator to use. def _equality_check(op): """ Shared code for __eq__ and __ne__, parameterized on the actual comparison operator to use. """ def method(self, other): if isinstance(othe...
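The pattern can be sketched with the ``operator`` module supplying the comparison; the class and attribute names below are hypothetical:

.. code-block:: python

    import operator

    def _equality_check(op):
        def method(self, other):
            if isinstance(other, Labelled):
                return op(self.codes, other.codes)
            return NotImplemented
        return method

    class Labelled:
        def __init__(self, codes):
            self.codes = codes

        __eq__ = _equality_check(operator.eq)
        __ne__ = _equality_check(operator.ne)

    print(Labelled(3) == Labelled(3), Labelled(3) != Labelled(4))  # True True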
Make an empty LabelArray with the same categories as ``self``, filled with ``self.missing_value``. def empty_like(self, shape): """ Make an empty LabelArray with the same categories as ``self``, filled with ``self.missing_value``. """ return type(self).from_codes_and_met...
Map a function from str -> bool element-wise over ``self``. ``f`` will be applied exactly once to each non-missing unique value in ``self``. Missing values will always return False. def map_predicate(self, f): """ Map a function from str -> bool element-wise over ``self``. ``f...
Map a function from str -> str element-wise over ``self``. ``f`` will be applied exactly once to each non-missing unique value in ``self``. Missing values will always map to ``self.missing_value``. def map(self, f): """ Map a function from str -> str element-wise over ``self``. ...
Asymmetric rounding function for adjusting prices to the specified number of places in a way that "improves" the price. For limit prices, this means preferring to round down on buys and preferring to round up on sells. For stop prices, it means the reverse. If prefer_round_down == True: When .0...
Check to make sure the stop/limit prices are reasonable and raise a BadOrderParameters exception if not. def check_stoplimit_prices(price, label): """ Check to make sure the stop/limit prices are reasonable and raise a BadOrderParameters exception if not. """ try: if not isfinite(price)...
Build a zipline data bundle from the directory with csv files. def csvdir_bundle(environ, asset_db_writer, minute_bar_writer, daily_bar_writer, adjustment_writer, calendar, start_session, end_s...
A factory for decorators that restrict Term methods to only be callable on Terms with a specific dtype. This is conceptually similar to zipline.utils.input_validation.expect_dtypes, but provides more flexibility for error messages that specifically target Term methods. Parameters ...
Returns the daily returns for the given period. Parameters ---------- start : datetime The inclusive starting session label. end : datetime, optional The inclusive ending session label. If not provided, treat ``start`` as a scalar key. Return...
Internal method that pre-calculates the benchmark return series for use in the simulation. Parameters ---------- asset: Asset to use trading_calendar: TradingCalendar trading_days: pd.DatetimeIndex data_portal: DataPortal Notes ----- ...