Parse one of the rules as either objectfilter or dottysql.
Example:
_parse_query("5 + 5")
# Returns Sum(Literal(5), Literal(5))
Arguments:
source: A rule in either objectfilter or dottysql syntax.
Returns:
The AST to represent the rule.
def _pa... |
Parse the tagfile and yield tuples of tag_name, list of rule ASTs.
def _parse_tagfile(self):
"""Parse the tagfile and yield tuples of tag_name, list of rule ASTs."""
rules = None
tag = None
for line in self.original:
match = self.TAG_DECL_LINE.match(line)
if matc... |
Normalize both sides, but don't eliminate the expression.
def normalize(expr):
"""Normalize both sides, but don't eliminate the expression."""
lhs = normalize(expr.lhs)
rhs = normalize(expr.rhs)
return type(expr)(lhs, rhs, start=lhs.start, end=rhs.end) |
No elimination, but normalize arguments.
def normalize(expr):
"""No elimination, but normalize arguments."""
args = [normalize(arg) for arg in expr.args]
return type(expr)(expr.func, *args, start=expr.start, end=expr.end) |
Pass through n-ary expressions, and eliminate empty branches.
Variadic and binary expressions recursively visit all their children.
If all children are eliminated then the parent expression is also
eliminated:
(& [removed] [removed]) => [removed]
If only one child is left, it is promoted to repl... |
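The elimination rule above can be sketched with a minimal stand-in representation — tuples for n-ary expressions and ``None`` for a removed branch (the names and representation here are illustrative, not EFILTER's actual AST classes):

```python
def eliminate(operator, children):
    """Sketch of n-ary elimination: None marks a removed child."""
    kept = [c for c in children if c is not None]
    if not kept:
        return None              # (& [removed] [removed]) => [removed]
    if len(kept) == 1:
        return kept[0]           # a single surviving child is promoted
    return (operator, kept)      # otherwise rebuild the n-ary expression
```

For example, ``eliminate('&', ['x', None])`` promotes the lone survivor and returns ``'x'``.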
Remove duplicates from a sequence (of hashable items) while maintaining
order. NOTE: This only works if items in the list are hashable types.
Taken from the Python Cookbook, 3rd ed. Such a great book!
def dedupe(items):
"""Remove duplicates from a sequence (of hashable items) while maintaining
order. ... |
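The body is truncated above; the Python Cookbook recipe it cites reads roughly as follows (a sketch of the standard recipe, which may differ in detail from the original snippet):

```python
def dedupe(items):
    """Yield items in order, skipping any already seen (hashable items only)."""
    seen = set()
    for item in items:
        if item not in seen:
            yield item
            seen.add(item)
```

Usage: ``list(dedupe([1, 5, 2, 1, 9, 1, 5, 10]))`` returns ``[1, 5, 2, 9, 10]``.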
Returns a generator that yields ``datetime.datetime`` objects from
the ``since`` date until ``to`` (default: *now*).
* ``granularity`` -- The granularity at which the generated datetime
objects should be created: seconds, minutes, hourly, daily, weekly,
monthly, or yearly
* ... |
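A minimal sketch of such a generator for the fixed-step granularities (the step mapping and function name are assumptions; monthly and yearly steps need calendar arithmetic and are omitted here):

```python
import datetime

STEPS = {
    'seconds': datetime.timedelta(seconds=1),
    'minutes': datetime.timedelta(minutes=1),
    'hourly':  datetime.timedelta(hours=1),
    'daily':   datetime.timedelta(days=1),
    'weekly':  datetime.timedelta(weeks=1),
}

def date_range(granularity, since, to=None):
    """Yield datetimes from `since` until `to` (default: now) at the given step."""
    to = to or datetime.datetime.utcnow()
    current = since
    while current <= to:
        yield current
        current += STEPS[granularity]
```

For example, a daily range from 2020-01-01 to 2020-01-03 yields three datetimes.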
Returns a set of the metric slugs for the given category
def _category_slugs(self, category):
"""Returns a set of the metric slugs for the given category"""
key = self._category_key(category)
slugs = self.r.smembers(key)
return slugs |
Add the ``slug`` to the ``category``. We store category data as
a set, with a key of the form::
c:<category name>
The data is set of metric slugs::
"slug-a", "slug-b", ...
def _categorize(self, slug, category):
"""Add the ``slug`` to the ``category``. We store catego... |
Returns a generator of all possible granularities based on the
MIN_GRANULARITY and MAX_GRANULARITY settings.
def _granularities(self):
"""Returns a generator of all possible granularities based on the
MIN_GRANULARITY and MAX_GRANULARITY settings.
"""
keep = False
for g i... |
Builds an OrderedDict of metric keys and patterns for the given slug
and date.
def _build_key_patterns(self, slug, date):
"""Builds an OrderedDict of metric keys and patterns for the given slug
and date."""
# we want to keep the order, from smallest to largest granularity
patts ... |
Builds redis keys used to store metrics.
* ``slug`` -- a slug used for a metric, e.g. "user-signups"
* ``date`` -- (optional) A ``datetime.datetime`` object used to
generate the time period for the metric. If omitted, the current date
and time (in UTC) will be used.
* ``gran... |
Return a dictionary of metrics data indexed by category:
{<category_name>: set(<slug1>, <slug2>, ...)}
def metric_slugs_by_category(self):
"""Return a dictionary of metrics data indexed by category:
{<category_name>: set(<slug1>, <slug2>, ...)}
"""
result = OrderedDic... |
Removes all keys for the given ``slug``.
def delete_metric(self, slug):
"""Removes all keys for the given ``slug``."""
# To remove all keys for a slug, I need to retrieve them all from
# the set of metric keys. This uses the redis "keys" command, which is
# inefficient, but this should... |
Assigns a specific value to the *current* metric. You can use this
to start a metric at a value greater than 0 or to reset a metric.
The given slug will be used to generate Redis keys at the following
granularities: Seconds, Minutes, Hours, Day, Week, Month, and Year.
Parameters:
... |
Records a metric, creating it if it doesn't exist or incrementing it
if it does. All metrics are prefixed with 'm', and automatically
aggregate for Seconds, Minutes, Hours, Day, Week, Month, and Year.
Parameters:
* ``slug`` -- a unique value to identify the metric; used in
co... |
Get the current values for a metric.
Returns a dictionary with metric values accumulated for the seconds,
minutes, hours, day, week, month, and year.
def get_metric(self, slug):
"""Get the current values for a metric.
Returns a dictionary with metric values accumulated for the seconds... |
Get the metrics for multiple slugs.
Returns a list of two-tuples containing the metric slug and a
dictionary like the one returned by ``get_metric``::
(
some-metric, {
'seconds': 0, 'minutes': 0, 'hours': 0,
'day': 0, 'week': 0, 'mont... |
Get metrics belonging to the given category
def get_category_metrics(self, category):
"""Get metrics belonging to the given category"""
slug_list = self._category_slugs(category)
return self.get_metrics(slug_list) |
Removes the category from Redis. This doesn't touch the metrics;
they simply become uncategorized.
def delete_category(self, category):
"""Removes the category from Redis. This doesn't touch the metrics;
they simply become uncategorized."""
# Remove mapping of metrics-to-category
... |
Resets (or creates) a category containing a list of metrics.
* ``category`` -- A category name
* ``metric_slugs`` -- a list of all metrics that are members of the
category.
def reset_category(self, category, metric_slugs):
"""Resets (or creates) a category containing a list of metr... |
Get history for one or more metrics.
* ``slugs`` -- a slug OR a list of slugs
* ``since`` -- the date from which we start pulling metrics
* ``to`` -- the date until which we stop pulling metrics
* ``granularity`` -- seconds, minutes, hourly,
daily, weekly, ... |
Provides the same data as ``get_metric_history``, but in a columnar
format. If you had the following yearly history, for example::
[
('m:bar:y:2012', '1'),
('m:bar:y:2013', '2'),
('m:foo:y:2012', '3'),
('m:foo:y:2013', '4')
... |
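The reshaping described above can be sketched as follows (the ``m:<slug>:<granularity>:<period>`` key layout is taken from the example; the helper name is illustrative):

```python
from collections import OrderedDict

def history_as_columns(history):
    """Reshape (key, value) history rows into columns, one column per slug."""
    periods = OrderedDict()   # period -> {slug: value}
    slugs = []
    for key, value in history:
        # 'm:bar:y:2012' -> slug 'bar', period 'y:2012'
        _, slug, *rest = key.split(':')
        period = ':'.join(rest)
        if slug not in slugs:
            slugs.append(slug)
        periods.setdefault(period, {})[slug] = value
    rows = [['Period'] + slugs]
    for period, data in periods.items():
        rows.append([period] + [data.get(s, 0) for s in slugs])
    return rows
```

Feeding it the four-row yearly history from the example yields a header row plus one row per period, with ``bar`` and ``foo`` as columns.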
Provides the same data as ``get_metric_history``, but with metrics
data arranged in a format that's easy to plot with Chart.js. If you had
the following yearly history, for example::
[
('m:bar:y:2012', '1'),
('m:bar:y:2013', '2'),
('m:bar:y:20... |
Set the value for a Gauge.
* ``slug`` -- the unique identifier (or key) for the Gauge
* ``current_value`` -- the value that the gauge should display
def gauge(self, slug, current_value):
"""Set the value for a Gauge.
* ``slug`` -- the unique identifier (or key) for the Gauge
*... |
Removes all gauges with the given ``slug``.
def delete_gauge(self, slug):
"""Removes all gauges with the given ``slug``."""
key = self._gauge_key(slug)
self.r.delete(key) # Remove the Gauge
self.r.srem(self._gauge_slugs_key, slug) |
Renders a template with a menu to view a metric (or metrics) for a
given number of years.
* ``slugs`` -- A Slug or a set/list of slugs
* ``years`` -- Number of years to show past metrics
* ``link_type`` -- What type of chart do we want ("history" or "aggregate")
* history -- use when displayin... |
Include a Donut Chart for the specified Gauge.
* ``slug`` -- the unique slug for the Gauge.
* ``maximum`` -- The maximum value for the gauge (default is 9000)
* ``size`` -- The size (in pixels) of the gauge (default is 200)
* ``coerce`` -- type to which gauge values should be coerced. The default
... |
Template Tag to display a metric's *current* detail.
* ``slug`` -- the metric's unique slug
* ``with_data_table`` -- if True, prints the raw data in a table.
def metric_detail(slug, with_data_table=False):
"""Template Tag to display a metric's *current* detail.
* ``slug`` -- the metric's unique slug
... |
Template Tag to display a metric's history.
* ``slug`` -- the metric's unique slug
* ``granularity`` -- the granularity: daily, hourly, weekly, monthly, yearly
* ``since`` -- a datetime object or a string matching one of the
following patterns: "YYYY-mm-dd" for a date or "YYYY-mm-dd HH:MM:SS" ... |
Template Tag to display multiple metrics.
* ``slug_list`` -- A list of slugs to display
* ``with_data_table`` -- if True, prints the raw data in a table.
def aggregate_detail(slug_list, with_data_table=False):
"""Template Tag to display multiple metrics.
* ``slug_list`` -- A list of slugs to display
... |
Template Tag to display history for multiple metrics.
* ``slug_list`` -- A list of slugs to display
* ``granularity`` -- the granularity: seconds, minutes, hourly,
daily, weekly, monthly, yearly
* ``since`` -- a datetime object or a string matching one of the
following... |
Run 'query' on 'vars' and return the result(s).
Arguments:
query: A query object or string with the query.
replacements: Build-time parameters to the query, either as dict or
as an array (for positional interpolation).
vars: The variables to be supplied to the query solver.
... |
Create an EFILTER-callable version of function 'func'.
As a security precaution, EFILTER will not execute Python callables
unless they implement the IApplicative protocol. There is a perfectly good
implementation of this protocol in the standard library and user functions
can inherit from it.
This... |
Determine the type of the query's output without actually running it.
Arguments:
query: A query object or string with the query.
replacements: Build-time parameters to the query, either as dict or as
an array (for positional interpolation).
root_type: The types of variables to b... |
Yield objects from 'data' that match the 'query'.
def search(query, data, replacements=None):
"""Yield objects from 'data' that match the 'query'."""
query = q.Query(query, params=replacements)
for entry in data:
if solve.solve(query, entry).value:
yield entry |
Look ahead, doesn't affect current_token and next_token.
def peek(self, steps=1):
"""Look ahead, doesn't affect current_token and next_token."""
try:
tokens = iter(self)
for _ in six.moves.range(steps):
next(tokens)
return next(tokens)
except... |
Skip ahead by 'steps' tokens.
def skip(self, steps=1):
"""Skip ahead by 'steps' tokens."""
for _ in six.moves.range(steps):
self.next_token() |
Returns the next logical token, advancing the tokenizer.
def next_token(self):
"""Returns the next logical token, advancing the tokenizer."""
if self.lookahead:
self.current_token = self.lookahead.popleft()
return self.current_token
self.current_token = self._parse_next... |
Will parse patterns until it gets to the next token or EOF.
def _parse_next_token(self):
"""Will parse patterns until it gets to the next token or EOF."""
while self._position < self.limit:
token = self._next_pattern()
if token:
return token
return None |
Parses the next pattern by matching each in turn.
def _next_pattern(self):
"""Parses the next pattern by matching each in turn."""
current_state = self.state_stack[-1]
position = self._position
for pattern in self.patterns:
if current_state not in pattern.states:
... |
Raise a nice error, with the token highlighted.
def _error(self, message, start, end=None):
"""Raise a nice error, with the token highlighted."""
raise errors.EfilterParseError(
source=self.source, start=start, end=end, message=message) |
Emits a token using the current pattern match and pattern label.
def emit(self, string, match, pattern, **_):
"""Emits a token using the current pattern match and pattern label."""
return grammar.Token(name=pattern.name, value=string,
start=match.start(), end=match.end()) |
Get version string by parsing PKG-INFO.
def get_pkg_version():
"""Get version string by parsing PKG-INFO."""
try:
with open("PKG-INFO", "r") as fp:
rgx = re.compile(r"Version: (\d+)")
for line in fp.readlines():
match = rgx.match(line)
if match:
... |
Generates a version string.
Arguments:
dev_version: Generate a verbose development version from git commits.
Examples:
1.1
1.1.dev43 # If 'dev_version' was passed.
def get_version(dev_version=False):
"""Generates a version string.
Arguments:
dev_version: Generate a ve... |
**Purpose**: Method to be executed in the heartbeat thread. This method sends a 'request' to the
heartbeat-req queue. It expects a 'response' message from the 'heartbeart-res' queue within 10 seconds. This
message should contain the same correlation id. If no message is received in 10 seconds, the tmgr ... |
**Purpose**: Method to be run by the tmgr process. This method receives a Task from the pending_queue
and submits it to the RTS. At all state transitions, the Task is synced (blocking) with the AppManager
in the master process.
In addition, the tmgr also receives heartbeat 'request' msgs from the... |
**Purpose**: Method to start the heartbeat thread. The heartbeat function
is not to be accessed directly. The function is started in a separate
thread using this method.
def start_heartbeat(self):
"""
**Purpose**: Method to start the heartbeat thread. The heartbeat function
is n... |
**Purpose**: Method to terminate the heartbeat thread. This method is
blocking as it waits for the heartbeat thread to terminate (aka join).
This is the last method that is executed from the TaskManager and
hence closes the profiler.
def terminate_heartbeat(self):
"""
**Purpos... |
**Purpose**: Method to terminate the tmgr process. This method is
blocking as it waits for the tmgr process to terminate (aka join).
def terminate_manager(self):
"""
**Purpose**: Method to terminate the tmgr process. This method is
blocking as it waits for the tmgr process to terminat... |
Yields all the values from 'generator_func' and type-checks.
Yields:
Whatever 'generator_func' yields.
Raises:
TypeError: if subsequent values are of a different type than first
value.
ValueError: if subsequent iteration returns a different number o... |
Sorted comparison of values.
def value_eq(self, other):
"""Sorted comparison of values."""
self_sorted = ordered.ordered(self.getvalues())
other_sorted = ordered.ordered(repeated.getvalues(other))
return self_sorted == other_sorted |
Print a detailed audit of all calls to this function.
def call_audit(func):
"""Print a detailed audit of all calls to this function."""
def audited_func(*args, **kwargs):
import traceback
stack = traceback.extract_stack()
r = func(*args, **kwargs)
func_name = func.__name__
... |
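The decorator body is truncated above; a hedged completion of the same technique (the exact message format is an assumption):

```python
import functools
import traceback

def call_audit(func):
    """Print caller location, arguments, and result for each call."""
    @functools.wraps(func)
    def audited_func(*args, **kwargs):
        stack = traceback.extract_stack()
        result = func(*args, **kwargs)
        caller = stack[-2]  # the frame that invoked the audited function
        print(f"{func.__name__}({args!r}, {kwargs!r}) -> {result!r} "
              f"called from {caller.filename}:{caller.lineno}")
        return result
    return audited_func
```

``functools.wraps`` keeps the decorated function's name and docstring intact, which matters when auditing is toggled on and off.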
See 'class_multimethod'.
def _class_dispatch(args, kwargs):
"""See 'class_multimethod'."""
_ = kwargs
if not args:
raise ValueError(
"Multimethods must be passed at least one positional arg.")
if not isinstance(args[0], type):
raise TypeError(
"class_multimethod... |
Prefer one type over another type, all else being equivalent.
With abstract base classes (Python's abc module) it is possible for
a type to appear to be a subclass of another type without the supertype
appearing in the subtype's MRO. As such, the supertype has no order
with respect to o... |
Finds the best implementation of this function given a type.
This function caches the result, and uses locking for thread safety.
Returns:
Implementing function, in below order of preference:
1. Explicitly registered implementations (through
multimethod.implement... |
Parse the arguments and return a tuple of types to implement for.
Raises:
ValueError or TypeError as appropriate.
def __get_types(for_type=None, for_types=None):
"""Parse the arguments and return a tuple of types to implement for.
Raises:
ValueError or TypeError as app... |
Return a decorator that will register the implementation.
Example:
@multimethod
def add(x, y):
pass
@add.implementation(for_type=int)
def add(x, y):
return x + y
@add.implementation(for_type=SomeType)
def ... |
Registers an implementing function for for_type.
Arguments:
implementation: Callable implementation for this type.
for_type: The type this implementation applies to.
for_types: Same as for_type, but takes a tuple of types.
for_type and for_types cannot both be p... |
Converts the given list of values into a list of integers. If the
integer conversion fails (e.g. non-numeric strings or None-values), this
filter will include a 0 instead.
def to_int_list(values):
"""Converts the given list of vlues into a list of integers. If the
integer conversion fails (e.g. non-nume... |
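The body is truncated above; a sketch matching the documented behavior (substitute 0 on any failed conversion):

```python
def to_int_list(values):
    """Convert each value to int, substituting 0 where conversion fails."""
    result = []
    for value in values:
        try:
            result.append(int(value))
        except (TypeError, ValueError):
            result.append(0)  # non-numeric strings, None, etc.
    return result
```

Catching both ``TypeError`` and ``ValueError`` covers ``None`` values as well as non-numeric strings.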
**Purpose**: Validate the resource description provided to the ResourceManager
def _validate_resource_desc(self):
"""
**Purpose**: Validate the resource description provided to the ResourceManager
"""
self._prof.prof('validating rdesc', uid=self._uid)
self._logger.debug('Valida... |
**Purpose**: Populate the ResourceManager class with the validated
resource description
def _populate(self):
"""
**Purpose**: Populate the ResourceManager class with the validated
resource description
"""
if self._validated:
... |
Includes the Gauge slugs and data in the context.
def get_context_data(self, **kwargs):
"""Includes the Gauge slugs and data in the context."""
data = super(GaugesView, self).get_context_data(**kwargs)
data.update({'gauges': get_r().gauge_slugs()})
return data |
Includes the metrics slugs in the context.
def get_context_data(self, **kwargs):
"""Includes the metrics slugs in the context."""
data = super(MetricsListView, self).get_context_data(**kwargs)
# Metrics organized by category, like so:
# { <category_name>: [ <slug1>, <slug2>, ... ]}
... |
Includes the metrics slugs in the context.
def get_context_data(self, **kwargs):
"""Includes the metrics slugs in the context."""
data = super(MetricDetailView, self).get_context_data(**kwargs)
data['slug'] = kwargs['slug']
data['granularities'] = list(get_r()._granularities())
... |
Includes the metrics slugs in the context.
def get_context_data(self, **kwargs):
"""Includes the metrics slugs in the context."""
data = super(MetricHistoryView, self).get_context_data(**kwargs)
# Accept GET query params for ``since``
since = self.request.GET.get('since', None)
... |
Reverses the ``redis_metric_aggregate_detail`` URL using
``self.metric_slugs`` as an argument.
def get_success_url(self):
"""Reverses the ``redis_metric_aggregate_detail`` URL using
``self.metric_slugs`` as an argument."""
slugs = '+'.join(self.metric_slugs)
url = reverse('redis... |
Pull the metrics from the submitted form, and store them as a
list of strings in ``self.metric_slugs``.
def form_valid(self, form):
"""Pull the metrics from the submitted form, and store them as a
list of strings in ``self.metric_slugs``.
"""
self.metric_slugs = [k.strip() for k... |
Includes the metrics slugs in the context.
def get_context_data(self, **kwargs):
"""Includes the metrics slugs in the context."""
r = get_r()
category = kwargs.pop('category', None)
data = super(AggregateDetailView, self).get_context_data(**kwargs)
if category:
slug... |
Includes the metrics slugs in the context.
def get_context_data(self, **kwargs):
"""Includes the metrics slugs in the context."""
r = get_r()
data = super(AggregateHistoryView, self).get_context_data(**kwargs)
slug_set = set(kwargs['slugs'].split('+'))
granularity = kwargs.get('... |
See if this view was called with a specified category.
def get(self, *args, **kwargs):
"""See if this view was called with a specified category."""
self.initial = {"category_name": kwargs.get('category_name', None)}
return super(CategoryFormView, self).get(*args, **kwargs) |
Get the category name/metric slugs from the form, and update the
category so it contains the given metrics.
def form_valid(self, form):
"""Get the category name/metric slugs from the form, and update the
category so it contains the given metrics."""
form.categorize_metrics()
return su... |
Rerun sets the state of the Pipeline to scheduling so that the Pipeline
can be checked for new stages
def rerun(self):
"""
Rerun sets the state of the Pipeline to scheduling so that the Pipeline
can be checked for new stages
"""
self._state = states.SCHEDULING
... |
Convert current Pipeline (i.e. its attributes) into a dictionary
:return: python dictionary
def to_dict(self):
"""
Convert current Pipeline (i.e. its attributes) into a dictionary
:return: python dictionary
"""
pipeline_desc_as_dict = {
'uid': self._uid,
... |
Create a Pipeline from a dictionary. The change is made in place.
:argument: python dictionary
:return: None
def from_dict(self, d):
"""
Create a Pipeline from a dictionary. The change is made in place.
:argument: python dictionary
:return: None
"""
if 'uid'... |
Purpose: Increment stage pointer. Also check if Pipeline has completed.
def _increment_stage(self):
"""
Purpose: Increment stage pointer. Also check if Pipeline has completed.
"""
try:
if self._cur_stage < self._stage_count:
self._cur_stage += 1
... |
Purpose: Decrement stage pointer. Reset completed flag.
def _decrement_stage(self):
"""
Purpose: Decrement stage pointer. Reset completed flag.
"""
try:
if self._cur_stage > 0:
self._cur_stage -= 1
self._completed_flag = threading.Event() #... |
Purpose: Validate whether the argument 'stages' is a list of Stage objects
:argument: list of Stage objects
def _validate_entities(self, stages):
"""
Purpose: Validate whether the argument 'stages' is a list of Stage objects
:argument: list of Stage objects
"""
if no... |
Purpose: Assign a uid to the current object based on the sid passed. Pass the current uid to children of
current object
def _assign_uid(self, sid):
"""
Purpose: Assign a uid to the current object based on the sid passed. Pass the current uid to children of
current object
"""
... |
Purpose: Pass current Pipeline's uid to all Stages.
:argument: List of Stage objects (optional)
def _pass_uid(self):
"""
Purpose: Pass current Pipeline's uid to all Stages.
:argument: List of Stage objects (optional)
"""
for stage in self._stages:
stage.pa... |
Hexdump function by sbz and 7h3rAm on Github:
(https://gist.github.com/7h3rAm/5603718).
:param src: Source, the string to be shown in hexadecimal format
:param length: Number of hex characters to print in one row
:param sep: Unprintable characters representation
:return:
def hexdump(src, length=16,... |
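The function body is truncated above; a hedged sketch of the classic hexdump layout follows. It accepts ``bytes`` and returns the formatted lines rather than printing them (a deviation from the gist, for testability):

```python
def hexdump(src, length=16, sep='.'):
    """Format `src` bytes as hexdump lines; `sep` stands in for unprintables."""
    lines = []
    for i in range(0, len(src), length):
        chunk = src[i:i + length]
        hex_part = ' '.join(f'{b:02x}' for b in chunk)
        text_part = ''.join(chr(b) if 32 <= b < 127 else sep for b in chunk)
        # offset, hex column padded to a fixed width, printable column
        lines.append(f'{i:08x}  {hex_part:<{length * 3}} |{text_part}|')
    return lines
```

Padding the hex column to ``length * 3`` characters keeps the printable column aligned on a short final row.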
Extracts YANG model from an IETF RFC or draft text file.
This is the main (external) API entry for the module.
:param add_line_refs:
:param source_id: identifier (file name or URL) of a draft or RFC file containing
one or more YANG models
:param srcdir: If source_id points to a file, the opt... |
Prints out a warning message to stderr.
:param s: The warning string to print
:return: None
def warning(self, s):
"""
Prints out a warning message to stderr.
:param s: The warning string to print
:return: None
"""
print(" WARNING: '%s', %s" % (self.src_... |
Prints out an error message to stderr.
:param s: The error string to print
:return: None
def error(self, s):
"""
Prints out an error message to stderr.
:param s: The error string to print
:return: None
"""
print(" ERROR: '%s', %s" % (self.src_id, s), fi... |
This function is a part of the model post-processing pipeline. It
removes leading spaces from an extracted module; depending on the
formatting of the draft/RFC text, each line may have multiple spaces
prepended. The function also determines the length of the longest
line in the modul... |
This function is a part of the model post-processing pipeline. For
each line in the module, it adds a reference to the line number in
the original draft/RFC from where the module line was extracted.
:param input_model: The YANG model to be processed
:return: Modified YANG model, where li... |
Removes superfluous newlines from a YANG model that was extracted
from a draft or RFC text. Newlines are removed whenever 2 or more
consecutive empty lines are found in the model. This function is a
part of the model post-processing pipeline.
:param input_model: The YANG model to be proc... |
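The rule above can be sketched as follows, assuming the model is a list of lines and that a run of two or more empty lines collapses to a single empty line (the collapse-to-one behavior is an assumption; the function name is illustrative):

```python
def remove_extra_newlines(model_lines):
    """Collapse runs of two or more empty lines into a single empty line."""
    out = []
    blanks = 0
    for line in model_lines:
        if line.strip():
            blanks = 0
            out.append(line)
        else:
            blanks += 1
            if blanks < 2:   # keep the first blank, drop the rest of the run
                out.append(line)
    return out
```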
This function defines the order and execution logic for actions
that are performed in the model post-processing pipeline.
:param input_model: The YANG model to be processed in the pipeline
:param add_line_refs: Flag that controls whether line number
references should be added to the ... |
Write a YANG model that was extracted from a source identifier
(URL or source .txt file) to a .yang destination file
:param mdl: YANG model, as a list of lines
:param fn: Name of the YANG model file
:return:
def write_model_to_file(self, mdl, fn):
"""
Write a YANG model ... |
Debug print of the currently parsed line
:param i: The line number of the line that is being currently parsed
:param level: Parser level
:param line: the line that is currently being parsed
:return: None
def debug_print_line(self, i, level, line):
"""
Debug print of the ... |
Debug print indicating that an empty line is being skipped
:param i: The line number of the line that is being currently parsed
:param line: the parsed line
:return: None
def debug_print_strip_msg(self, i, line):
"""
Debug print indicating that an empty line is being skipped
... |
Skip over empty lines
:param content: parsed text
:param i: current parsed line
:return: number of skipped lines
def strip_empty_lines_forward(self, content, i):
"""
Skip over empty lines
:param content: parsed text
:param i: current parsed line
:return: ... |
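The truncated body presumably counts consecutive empty lines from the current position; a minimal sketch under that assumption:

```python
def strip_empty_lines_forward(content, i):
    """Return the number of consecutive empty lines starting at index i."""
    skipped = 0
    while i + skipped < len(content) and not content[i + skipped].strip():
        skipped += 1
    return skipped
```

The caller can then advance its line index by the returned count.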
Strips empty lines preceding the line that is currently being parsed. This
function is called when the parser encounters a Footer.
:param model: lines that were added to the model up to this point
:param line_num: the number of the line being parsed
:param max_lines_to_strip: max number ... |
Extracts one or more YANG models from an RFC or draft text string in
which the models are specified. The function skips over page
formatting (Page Headers and Footers) and performs basic YANG module
syntax checking. In strict mode, the function also enforces the
<CODE BEGINS> / <CODE END... |
Decorator for retrying method calls, based on instance parameters.
def auto_retry(fun):
"""Decorator for retrying method calls, based on instance parameters."""
@functools.wraps(fun)
def decorated(instance, *args, **kwargs):
"""Wrapper around a decorated function."""
cfg = instance._retry_... |
Extracts used strings from a %(foo)s pattern.
def extract_pattern(fmt):
"""Extracts used strings from a %(foo)s pattern."""
class FakeDict(object):
def __init__(self):
self.seen_keys = set()
def __getitem__(self, key):
self.seen_keys.add(key)
return ''
... |
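The row is truncated before the trick completes; a hedged completion. The idea is to apply the format string against an object that records every key it is asked for (the final return step is an assumption):

```python
def extract_pattern(fmt):
    """Return the set of keys used by a %(name)s-style format string."""
    class FakeDict(object):
        def __init__(self):
            self.seen_keys = set()
        def __getitem__(self, key):
            self.seen_keys.add(key)
            return ''   # any string keeps %s substitution happy
    fake = FakeDict()
    fmt % fake      # formatting calls __getitem__ once per %(key)s
    return fake.seen_keys
```

This works because mapping-based ``%`` formatting looks each key up via ``__getitem__``, so the fake dict sees exactly the keys the pattern uses.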
Generate an isocurve from vertex data in a surface mesh.
Parameters
----------
vertices : ndarray, shape (Nv, 3)
Vertex coordinates.
tris : ndarray, shape (Nf, 3)
Indices of triangular elements into the vertices array.
vertex_data : ndarray, shape (Nv,)
data at vertex.
le... |
Set the data
Parameters
----------
vertices : ndarray, shape (Nv, 3) | None
Vertex coordinates.
tris : ndarray, shape (Nf, 3) | None
Indices into the vertex array.
data : ndarray, shape (Nv,) | None
scalar at vertices
def set_data(self, verti... |
Set the color
Parameters
----------
color : instance of Color
The color to use.
def set_color(self, color):
"""Set the color
Parameters
----------
color : instance of Color
The color to use.
"""
if color is not None:
... |
compute LineVisual color from level index and corresponding level
color
def _compute_iso_color(self):
""" compute LineVisual color from level index and corresponding level
color
"""
level_color = []
colors = self._lc
for i, index in enumerate(self._li):
... |