Select an action according to the action selection strategy of the associated algorithm. If an action has already been selected, raise a ValueError instead. Usage: if match_set.selected_action is None: match_set.select_action() Arguments: None Return...
Setter method for the selected_action property. def _set_selected_action(self, action): """Setter method for the selected_action property.""" assert action in self._action_sets if self._selected_action is not None: raise ValueError("The action has already been selected.") s...
Setter method for the payoff property. def _set_payoff(self, payoff): """Setter method for the payoff property.""" if self._selected_action is None: raise ValueError("The action has not been selected yet.") if self._closed: raise ValueError("The payoff for this match set...
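The select-once guard described by the two setters above can be sketched as a write-once property. This is an illustrative reimplementation under assumed names, not the library's actual class; only the selection guard is modeled.

```python
class MatchSet:
    """Sketch of the select-once lifecycle: an action may be
    assigned exactly once, and only from the known action sets."""

    def __init__(self, action_sets):
        self._action_sets = set(action_sets)
        self._selected_action = None

    @property
    def selected_action(self):
        return self._selected_action

    @selected_action.setter
    def selected_action(self, action):
        if action not in self._action_sets:
            raise ValueError("Action is not represented in this match set.")
        if self._selected_action is not None:
            raise ValueError("The action has already been selected.")
        self._selected_action = action
```

The same pattern extends naturally to the payoff setter, which additionally requires that an action has been selected first.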
If the predecessor is not None, gives the appropriate amount of payoff to the predecessor in payment for its contribution to this match set's expected future payoff. The predecessor argument should be either None or a MatchSet instance whose selected action led directly to this match set...
Apply the payoff that has been accumulated from immediate reward and/or payments from successor match sets. Attempting to call this method before an action has been selected or after it has already been called for the same match set will result in a ValueError. Usage: ...
Accept a situation (input) and return a MatchSet containing the classifier rules whose conditions match the situation. If appropriate per the algorithm managing this classifier set, create new rules to ensure sufficient coverage of the possible actions. Usage: match_set = mo...
Add a new classifier rule to the classifier set. Return a list containing zero or more rules that were deleted from the classifier by the algorithm in order to make room for the new rule. The rule argument should be a ClassifierRule instance. The behavior of this method depends on whethe...
Remove one or more instances of a rule from the classifier set. Return a Boolean indicating whether the rule's numerosity dropped to zero. (If the rule's numerosity was already zero, do nothing and return False.) Usage: if rule in model and model.discard(rule, count=3): ...
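The numerosity-counted `discard` semantics above (return True only when the count actually drops to zero; do nothing on an absent rule) can be sketched with a plain counter. Class and method names here are illustrative, not the library's real classifier set.

```python
class ClassifierSet:
    """Sketch: rules stored with a numerosity count and
    discard() returning True iff numerosity reached zero."""

    def __init__(self):
        self._numerosity = {}

    def add(self, rule, count=1):
        self._numerosity[rule] = self._numerosity.get(rule, 0) + count

    def discard(self, rule, count=1):
        # Absent or zero-count rules: do nothing, report False.
        if self._numerosity.get(rule, 0) == 0:
            return False
        remaining = self._numerosity[rule] - count
        if remaining <= 0:
            del self._numerosity[rule]
            return True
        self._numerosity[rule] = remaining
        return False
```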
Return the existing version of the given rule. If the rule is not present in the classifier set, return the default. If no default was given, use None. This is useful for eliminating duplicate copies of rules. Usage: unique_rule = model.get(possible_duplicate, possible_dupli...
Run the algorithm, utilizing the classifier set to choose the most appropriate action for each situation produced by the scenario. If learn is True, improve the situation/action mapping to maximize reward. Otherwise, ignore any reward received. Usage: model.run(scenario, lea...
List configuration files detected (and/or examined paths). def ls(system, user, local, include_missing): """List configuration files detected (and/or examined paths).""" # default action is to list *all* auto-detected files if not (system or user or local): system = user = local = True for pa...
Inspect existing configuration/profile. def inspect(config_file, profile): """Inspect existing configuration/profile.""" try: section = load_profile_from_files( [config_file] if config_file else None, profile) click.echo("Configuration file: {}".format(config_file if config_file e...
Create and/or update cloud client configuration file. def create(config_file, profile): """Create and/or update cloud client configuration file.""" # determine the config file path if config_file: click.echo("Using configuration file: {}".format(config_file)) else: # path not given, tr...
Helper method for the ping command that uses `output()` for info output and raises `CLIError()` on handled errors. This function is invariant to output format and/or error signaling mechanism. def _ping(config_file, profile, solver_def, request_timeout, polling_timeout, output): """Helper method for the p...
Ping the QPU by submitting a single-qubit problem. def ping(config_file, profile, solver_def, json_output, request_timeout, polling_timeout): """Ping the QPU by submitting a single-qubit problem.""" now = utcnow() info = dict(datetime=now.isoformat(), timestamp=datetime_to_timestamp(now), code=0) def...
Get solver details. Unless solver name/id specified, fetch and display details for all online solvers available on the configured endpoint. def solvers(config_file, profile, solver_def, list_solvers): """Get solver details. Unless solver name/id specified, fetch and display details for all online...
Submit Ising-formulated problem and return samples. def sample(config_file, profile, solver_def, biases, couplings, random_problem, num_reads, verbose): """Submit Ising-formulated problem and return samples.""" # TODO: de-dup wrt ping def echo(s, maxlen=100): click.echo(s if verbose el...
Return a function that produces samples of a sine. Parameters ---------- samplerate : float The sample rate. params : dict Parameters for FM generation. num_samples : int, optional Number of samples to be generated on each call. def get_input_callback(samplerate, params, nu...
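The sine-producing input callback can be sketched without any audio dependencies. The `frequency` argument here is a hypothetical stand-in for the `params` dict in the original API (which drives FM generation); the closure keeps phase across calls so successive blocks are continuous.

```python
import math

def get_input_callback(samplerate, frequency=440.0, num_samples=1024):
    """Sketch: return a callable producing successive blocks of a sine wave."""
    phase = 0.0
    step = 2.0 * math.pi * frequency / samplerate

    def producer():
        nonlocal phase
        block = []
        for _ in range(num_samples):
            block.append(math.sin(phase))
            phase += step
        return block

    return producer
```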
Return a sound playback callback. Parameters ---------- resampler The resampler from which samples are read. samplerate : float The sample rate. params : dict Parameters for FM generation. def get_playback_callback(resampler, samplerate, params): """Return a sound playb...
Setup the resampling and audio output callbacks and start playback. def main(source_samplerate, target_samplerate, params, converter_type): """Setup the resampling and audio output callbacks and start playback.""" from time import sleep ratio = target_samplerate / source_samplerate with sr.CallbackRe...
Returns the maximum number of reads for the given solver parameters. Args: **params: Parameters for the sampling method. Relevant to num_reads: - annealing_time - readout_thermalization - num_reads - programming_therma...
Sample from the specified Ising model. Args: linear (list/dict): Linear terms of the model (h). quadratic (dict of (int, int):float): Quadratic terms of the model (J). **params: Parameters for the sampling method, specified per solver. Returns: :obj:`Fut...
Sample from the specified QUBO. Args: qubo (dict of (int, int):float): Coefficients of a quadratic unconstrained binary optimization (QUBO) model. **params: Parameters for the sampling method, specified per solver. Returns: :obj:`Future` Exa...
Internal method for both sample_ising and sample_qubo. Args: linear (list/dict): Linear terms of the model. quadratic (dict of (int, int):float): Quadratic terms of the model. **params: Parameters for the sampling method, specified per solver. Returns: :...
Reformat some of the parameters for sapi. def _format_params(self, type_, params): """Reformat some of the parameters for sapi.""" if 'initial_state' in params: # NB: at this moment the error raised when initial_state does not match lin/quad (in # active qubits) is not very info...
Test if an Ising model matches the graph provided by the solver. Args: linear (list/dict): Linear terms of the model (h). quadratic (dict of (int, int):float): Quadratic terms of the model (J). Returns: boolean Examples: This example creates a c...
Resume polling for a problem previously submitted. Args: id_: Identification of the query. Returns: :obj: `Future` def _retrieve_problem(self, id_): """Resume polling for a problem previously submitted. Args: id_: Identification of the query. ...
Return the converter type for `identifier`. def _get_converter_type(identifier): """Return the converter type for `identifier`.""" if isinstance(identifier, str): return ConverterType[identifier] if isinstance(identifier, ConverterType): return identifier return ConverterType(identifier...
Resample the signal in `input_data` at once. Parameters ---------- input_data : ndarray Input data. A single channel is provided as a 1D array of `num_frames` length. Input data with several channels is represented as a 2D array of shape (`num_frames`, `num_channels`). For use with ...
Set a new conversion ratio immediately. def set_ratio(self, new_ratio): """Set a new conversion ratio immediately.""" from samplerate.lowlevel import src_set_ratio return src_set_ratio(self._state, new_ratio)
Resample the signal in `input_data`. Parameters ---------- input_data : ndarray Input data. A single channel is provided as a 1D array of `num_frames` length. Input data with several channels is represented as a 2D array of shape (`num_frames`, `num_channels`...
Create new callback resampler. def _create(self): """Create new callback resampler.""" from samplerate.lowlevel import ffi, src_callback_new, src_delete from samplerate.exceptions import ResamplingError state, handle, error = src_callback_new( self._callback, self._converte...
Set the starting conversion ratio for the next `read` call. def set_starting_ratio(self, ratio): """ Set the starting conversion ratio for the next `read` call. """ from samplerate.lowlevel import src_set_ratio if self._state is None: self._create() src_set_ratio(self._state...
Reset state. def reset(self): """Reset state.""" from samplerate.lowlevel import src_reset if self._state is None: self._create() src_reset(self._state)
Read a number of frames from the resampler. Parameters ---------- num_frames : int Number of frames to read. Returns ------- output_data : ndarray Resampled frames as a (`num_output_frames`, `num_channels`) or (`num_output_frames`,) a...
Batch variance calculation. def get_variance(seq): """ Batch variance calculation. """ m = get_mean(seq) return sum((v-m)**2 for v in seq)/float(len(seq))
Batch mean absolute error calculation. def mean_absolute_error(seq, correct): """ Batch mean absolute error calculation. """ assert len(seq) == len(correct) diffs = [abs(a-b) for a, b in zip(seq, correct)] return sum(diffs)/float(len(diffs))
Scales each number in the sequence so that the sum of all numbers equals 1. def normalize(seq): """ Scales each number in the sequence so that the sum of all numbers equals 1. """ s = float(sum(seq)) return [v/s for v in seq]
Describes the probability that a real-valued random variable X with a normal distribution will be found at a value less than or equal to x. http://en.wikipedia.org/wiki/Cumulative_distribution_function def normcdf(x, mu, sigma): """ Describes the probability...
Describes the relative likelihood that a real-valued random variable X will take on a given value. http://en.wikipedia.org/wiki/Probability_density_function def normpdf(x, mu, sigma): """ Describes the relative likelihood that a real-valued random variable X will take on a given value. ...
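Both distribution helpers above have standard closed forms; a minimal standard-library sketch (signatures assumed to match the truncated originals):

```python
import math

def normpdf(x, mu, sigma):
    """Normal probability density at x, mean mu, std dev sigma."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def normcdf(x, mu, sigma):
    """Normal cumulative distribution at x, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
```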
Calculates the entropy of the attribute attr in given data set data. Parameters: data<dict|list> := if dict, treated as value counts of the given attribute name if list, treated as a raw list from which the value counts will be generated attr<string> := the name of the class attribute ...
Calculates the variance of a continuous class attribute, to be used as an entropy metric. def entropy_variance(data, class_attr=None, method=DEFAULT_CONTINUOUS_METRIC): """ Calculates the variance of a continuous class attribute, to be used as an entropy metric. """ assert method in CONTINU...
Calculates the information gain (reduction in entropy) that would result by splitting the data on the chosen attribute (attr). Parameters: prefer_fewer_values := Weights the gain by the count of the attribute's unique values. If multiple attributes have the same gain, but one has s...
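A self-contained sketch of entropy and information gain over a list of record dicts; the function names and signatures here are illustrative, not the module's actual ones, and the optional value-count weighting is omitted.

```python
import math
from collections import defaultdict

def entropy(counts):
    """Shannon entropy (bits) of a {value: count} mapping."""
    total = float(sum(counts.values()))
    return -sum((c / total) * math.log(c / total, 2)
                for c in counts.values() if c)

def gain(records, attr, class_attr):
    """Information gain from splitting `records` on `attr`."""
    class_counts = defaultdict(int)
    for r in records:
        class_counts[r[class_attr]] += 1
    base = entropy(class_counts)

    # Class counts within each subset induced by attr's values.
    by_value = defaultdict(lambda: defaultdict(int))
    for r in records:
        by_value[r[attr]][r[class_attr]] += 1

    n = float(len(records))
    remainder = sum((sum(cc.values()) / n) * entropy(cc)
                    for cc in by_value.values())
    return base - remainder
```

When an attribute perfectly separates the classes, the gain equals the base entropy of the class distribution.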
Creates a list of all values in the target attribute for each record in the data list object, and returns the value that appears most frequently in this list. def majority_value(data, class_attr): """ Creates a list of all values in the target attribute for each record in the data list object, ...
Returns the item that appears most frequently in the given list. def most_frequent(lst): """ Returns the item that appears most frequently in the given list. """ lst = lst[:] highest_freq = 0 most_freq = None for val in unique(lst): if lst.count(val) > highest_freq: mos...
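An idiomatic alternative to the counting loop above uses `collections.Counter`; this is a sketch of the same behavior, not the module's implementation, and like the original it breaks ties arbitrarily.

```python
from collections import Counter

def most_frequent(lst):
    """Return the item that appears most frequently in `lst`."""
    # most_common(1) yields [(item, count)] for the top item.
    return Counter(lst).most_common(1)[0][0]
```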
Returns a list made up of the unique values found in lst. i.e., it removes the redundant values in lst. def unique(lst): """ Returns a list made up of the unique values found in lst. i.e., it removes the redundant values in lst. """ lst = lst[:] unique_lst = [] # Cycle through the li...
Cycles through all the attributes and returns the attribute with the highest information gain (or lowest entropy). def choose_attribute(data, attributes, class_attr, fitness, method): """ Cycles through all the attributes and returns the attribute with the highest information gain (or lowest entropy). ...
Returns a new decision tree based on the examples given. def create_decision_tree(data, attributes, class_attr, fitness_func, wrapper, **kwargs): """ Returns a new decision tree based on the examples given. """ split_attr = kwargs.get('split_attr', None) split_val = kwargs.get('split_val', Non...
Increments the count for the given element. def add(self, k, count=1): """ Increments the count for the given element. """ self.counts[k] += count self.total += count
Returns the element with the highest probability. def best(self): """ Returns the element with the highest probability. """ b = (-1e999999, None) for k, c in iteritems(self.counts): b = max(b, (c, k)) return b[1]
Returns a list of probabilities for all elements in the form [(value1,prob1),(value2,prob2),...]. def probs(self): """ Returns a list of probabilities for all elements in the form [(value1,prob1),(value2,prob2),...]. """ return [ (k, self.counts[k]/float(self...
Adds the given distribution's counts to the current distribution. def update(self, dist): """ Adds the given distribution's counts to the current distribution. """ assert isinstance(dist, DDist) for k, c in iteritems(dist.counts): self.counts[k] += c self.tot...
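The distribution operations described above (add / best / probs / update) fit in a minimal class; this is an illustrative sketch, not the actual `DDist` implementation.

```python
from collections import defaultdict

class DDist:
    """Sketch of a discrete distribution backed by value counts."""

    def __init__(self):
        self.counts = defaultdict(int)
        self.total = 0

    def add(self, k, count=1):
        """Increment the count for element k."""
        self.counts[k] += count
        self.total += count

    def best(self):
        """Return the element with the highest probability."""
        if not self.counts:
            return None
        return max(self.counts.items(), key=lambda kc: kc[1])[0]

    def probs(self):
        """Return [(value, probability), ...] for all elements."""
        return [(k, c / float(self.total)) for k, c in self.counts.items()]

    def update(self, dist):
        """Add another DDist's counts into this one."""
        for k, c in dist.counts.items():
            self.add(k, c)
```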
Returns the probability of a random variable being less than the given value. def probability_lt(self, x): """ Returns the probability of a random variable being less than the given value. """ if self.mean is None: return return normdist(x=x, mu=self....
Returns the probability of a random variable falling between the given values. def probability_in(self, a, b): """ Returns the probability of a random variable falling between the given values. """ if self.mean is None: return p1 = normdist(x=a, mu=se...
Returns the probability of a random variable being greater than the given value. def probability_gt(self, x): """ Returns the probability of a random variable being greater than the given value. """ if self.mean is None: return p = normdist(x=x, mu=se...
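The three probability helpers above all reduce to evaluations of the normal CDF. A self-contained sketch, with module-level functions standing in for the methods (the real methods read `mean` and `var` from the instance):

```python
import math

def _normcdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def probability_lt(mean, var, x):
    """P(X < x) for X ~ N(mean, var)."""
    return _normcdf(x, mean, math.sqrt(var))

def probability_gt(mean, var, x):
    """P(X > x) for X ~ N(mean, var)."""
    return 1.0 - _normcdf(x, mean, math.sqrt(var))

def probability_in(mean, var, a, b):
    """P(a < X < b) for X ~ N(mean, var)."""
    sigma = math.sqrt(var)
    return _normcdf(b, mean, sigma) - _normcdf(a, mean, sigma)
```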
Returns a copy of the object without any data. def copy_no_data(self): """ Returns a copy of the object without any data. """ return type(self)( [], order=list(self.header_modes), types=self.header_types.copy(), modes=self.header_modes.cop...
Returns true if the given value matches the type for the given name according to the schema. Returns false otherwise. def is_valid(self, name, value): """ Returns true if the given value matches the type for the given name according to the schema. Returns false otherwise...
When a CSV file is given, extracts header information from the file. Otherwise, this header data must be explicitly given when the object is instantiated. def _read_header(self): """ When a CSV file is given, extracts header information from the file. Otherwise, this header data must be e...
Ensure each element in the row matches the schema. def validate_row(self, row): """ Ensure each element in the row matches the schema. """ clean_row = {} if isinstance(row, (tuple, list)): assert self.header_order, "No attribute order specified." assert l...
Returns two Data instances, containing the data randomly split between the two according to the given ratio. The first instance will contain the ratio of data specified. The second instance will contain the remaining ratio of data. If leave_one_out is True, the ratio wi...
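The ratio/leave-one-out split described above can be sketched with `random.shuffle` over plain lists; the function name, `seed` parameter, and list return type are assumptions for illustration (the real method returns `Data` instances).

```python
import random

def split(rows, ratio=0.5, leave_one_out=False, seed=None):
    """Sketch: randomly partition `rows` into two lists.

    The first list holds roughly `ratio` of the data (or all but one
    row when `leave_one_out` is set); the second holds the remainder.
    """
    rows = list(rows)
    rng = random.Random(seed)
    rng.shuffle(rows)
    cut = len(rows) - 1 if leave_one_out else int(len(rows) * ratio)
    return rows[:cut], rows[cut:]
```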
Gets the closest value for the current node's attribute matching the given record. def _get_attribute_value_for_node(self, record): """ Gets the closest value for the current node's attribute matching the given record. """ # Abort if this node has not yet been split
Retrieves the unique set of values seen for the given attribute at this node. def get_values(self, attr_name): """ Retrieves the unique set of values seen for the given attribute at this node. """ ret = list(self._attr_value_cdist[attr_name].keys()) \ + list(...
Returns the name of the attribute with the highest gain. def get_best_splitting_attr(self): """ Returns the name of the attribute with the highest gain. """ best = (-1e999999, None) for attr in self.attributes: best = max(best, (self.get_gain(attr), attr)) be...
Calculates the entropy of a specific attribute/value combination. def get_entropy(self, attr_name=None, attr_value=None): """ Calculates the entropy of a specific attribute/value combination. """ is_con = self.tree.data.is_continuous_class if is_con: if attr_name is ...
Calculates the information gain from splitting on the given attribute. def get_gain(self, attr_name): """ Calculates the information gain from splitting on the given attribute. """ subset_entropy = 0.0 for value in iterkeys(self._attr_value_counts[attr_name]): value_...
Returns the class value probability distribution of the given attribute value. def get_value_ddist(self, attr_name, attr_value): """ Returns the class value probability distribution of the given attribute value. """ assert not self.tree.data.is_continuous_class, \ ...
Returns the value probability of the given attribute at this node. def get_value_prob(self, attr_name, value): """ Returns the value probability of the given attribute at this node. """ if attr_name not in self._attr_value_count_totals: return n = self._attr_value_co...
Returns the estimated value of the class attribute for the given record. def predict(self, record, depth=0): """ Returns the estimated value of the class attribute for the given record. """ # Check if we're ready to predict. if not self.ready_to_predict:...
Returns true if this node is ready to branch off additional nodes. Returns false otherwise. def ready_to_split(self): """ Returns true if this node is ready to branch off additional nodes. Returns false otherwise. """ # Never split if we're a leaf that predicts adequatel...
Sets the probability distribution at a leaf node. def set_leaf_dist(self, attr_value, dist): """ Sets the probability distribution at a leaf node. """ assert self.attr_name assert self.tree.data.is_valid(self.attr_name, attr_value), \ "Value %s is invalid for attribu...
Incrementally update the statistics at this node. def train(self, record): """ Incrementally update the statistics at this node. """ self.n += 1 class_attr = self.tree.data.class_attribute_name class_value = record[class_attr] # Update class statistics. ...
Constructs a classification or regression tree in a single batch by analyzing the given data. def build(cls, data, *args, **kwargs): """ Constructs a classification or regression tree in a single batch by analyzing the given data. """ assert isinstance(data, Data) ...
Returns the mean absolute error for predictions on the out-of-bag samples. def out_of_bag_mae(self): """ Returns the mean absolute error for predictions on the out-of-bag samples. """ if not self._out_of_bag_mae_clean: try: self._out_of_bag_ma...
Returns the out-of-bag samples list, inside a wrapper to keep track of modifications. def out_of_bag_samples(self): """ Returns the out-of-bag samples list, inside a wrapper to keep track of modifications. """ #TODO:replace with more a generic pass-through wrapper? ...
Sets the behavior, for one or all attributes, used when tree traversal with a query vector encounters a branch that does not exist. def set_missing_value_policy(self, policy, target_attr_name=None): """ Sets the behavior, for one or all attributes, used when tree ...
Incrementally updates the tree with the given sample record. def train(self, record): """ Incrementally updates the tree with the given sample record. """ assert self.data.class_attribute_name in record, \ "The class attribute must be present in the record." record =...
Removes trees from the forest according to the specified fell method. def _fell_trees(self): """ Removes trees from the forest according to the specified fell method. """ if callable(self.fell_method): for tree in self.fell_method(list(self.trees)): self.tree...
Gets the prediction from the tree with the lowest mean absolute error. def _get_best_prediction(self, record, train=True): """ Gets the prediction from the tree with the lowest mean absolute error. """ if not self.trees: return best = (+1e999999, None) for tr...
Returns weights such that the tree with the smallest out-of-bag mean absolute error receives all the weight. def best_oob_mae_weight(trees): """ Returns weights such that the tree with the smallest out-of-bag mean absolute error receives all the weight. """ best = (+1e999999, None) for tree in trees: oob_mae = tree.out_of_bag_m...
Returns weights proportional to the out-of-bag mean absolute error for each tree. def mean_oob_mae_weight(trees): """ Returns weights proportional to the out-of-bag mean absolute error for each tree. """ weights = [] active_trees = [] for tree in trees: oob_m...
Adds new trees to the forest according to the specified growth method. def _grow_trees(self): """ Adds new trees to the forest according to the specified growth method. """ if self.grow_method == GROW_AUTO_INCREMENTAL: self.tree_kwargs['auto_grow'] = True wh...
Attempts to predict the value of the class attribute by aggregating the predictions of each tree. Parameters: weighting_formula := a callable that takes a list of trees and returns a list of weights. def predict(self, record): """ Attempts to predict...
Updates the trees with the given training record. def train(self, record): """ Updates the trees with the given training record. """ self._fell_trees() self._grow_trees() for tree in self.trees: if random.random() < self.sample_ratio: tree.tra...
Return a list of local configuration file paths. Search paths for configuration files on the local system are based on homebase_ and depend on operating system; for example, for Linux systems these might include ``dwave.conf`` in the current working directory (CWD), user-local ``.config/dwave/``, and s...
Return the default configuration-file path. Typically returns a user-local configuration file; e.g: ``~/.config/dwave/dwave.conf``. Returns: str: Configuration file path. Examples: This example displays the default configuration file on an Ubuntu Unix system runnin...
Load D-Wave Cloud Client configuration from a list of files. .. note:: This method is not typically used to set up D-Wave Cloud Client configuration. It is recommended you use :meth:`.Client.from_config` or :meth:`.config.load_config` instead. Configuration files comply with standard Windows ...
Load a profile from a list of D-Wave Cloud Client configuration files. .. note:: This method is not typically used to set up D-Wave Cloud Client configuration. It is recommended you use :meth:`.Client.from_config` or :meth:`.config.load_config` instead. Configuration files comply with standar...
Load D-Wave Cloud Client configuration based on a configuration file. Configuration values can be specified in multiple ways, ranked in the following order (with 1 the highest ranked): 1. Values specified as keyword arguments in :func:`load_config()`. These values replace values read from a configu...
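The layered precedence described above (keyword arguments over environment variables over file values) can be illustrated with a generic `configparser` sketch. The section name, the `DWAVE_API_TOKEN` variable, and the key names here are assumptions for illustration, not the client's exact resolution logic.

```python
import os
from configparser import ConfigParser

def load_config(path, profile="defaults", **kwargs):
    """Sketch of layered configuration precedence:
    kwargs > environment variables > configuration file."""
    parser = ConfigParser()
    parser.read(path)
    config = dict(parser[profile]) if parser.has_section(profile) else {}

    # Environment overrides the file-level value.
    env_token = os.environ.get("DWAVE_API_TOKEN")
    if env_token is not None:
        config["token"] = env_token

    # Explicit keyword arguments override everything else.
    config.update({k: v for k, v in kwargs.items() if v is not None})
    return config
```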
Load configured URLs and token for the SAPI server. .. warning:: Included only for backward compatibility. Please use :func:`load_config` or the client factory :meth:`~dwave.cloud.client.Client.from_config` instead. This method tries to load a legacy configuration file from ``~/.dwrc``, select...
Check whether `data` is a valid input/output for libsamplerate. Returns ------- num_frames Number of frames in `data`. channels Number of channels in `data`. Raises ------ ValueError: If invalid data is supplied. def _check_data(data): """Check whether `data` is a ...
Perform a single conversion from an input buffer to an output buffer. Simple interface for performing a single conversion from input buffer to output buffer at a fixed conversion ratio. The simple interface does not require initialisation, as it only operates on a single buffer's worth of audio. def src_simpl...
Initialise a new sample rate converter. Parameters ---------- converter_type : int Converter to be used. channels : int Number of channels. Returns ------- state An anonymous pointer to the internal state of the converter. error : int Error code. def sr...
Standard processing function. Returns non-zero on error. def src_process(state, input_data, output_data, ratio, end_of_input=0): """Standard processing function. Returns non-zero on error. """ input_frames, _ = _check_data(input_data) output_frames, _ = _check_data(output_data) data = ffi...
Internal callback function to be used with the callback API. Pulls the Python callback function from the handle contained in `cb_data` and calls it to fetch frames. Frames are converted to the format required by the API (float, interleaved channels). A reference to these data is kept internally. R...
Initialisation for the callback based API. Parameters ---------- callback : function Called whenever new frames are to be read. Must return a NumPy array of shape (num_frames, channels). converter_type : int Converter to be used. channels : int Number of channels. ...
Read up to `frames` worth of data using the callback API. Returns ------- frames : int Number of frames read or -1 on error. def src_callback_read(state, ratio, frames, data): """Read up to `frames` worth of data using the callback API. Returns ------- frames : int Number ...
Client factory method to instantiate a client instance from configuration. Configuration values can be specified in multiple ways, ranked in the following order (with 1 the highest ranked): 1. Values specified as keyword arguments in :func:`from_config()` 2. Values specified as environ...
Perform a clean shutdown. Waits for all the currently scheduled work to finish, kills the workers, and closes the connection pool. .. note:: Ensure your code does not submit new work while the connection is closing. Where possible, it is recommended you use a context manager (a :code:...
Return a filtered list of solvers handled by this client. Args: refresh (bool, default=False): Force refresh of cached list of solvers/properties. order_by (callable/str/None, default='avg_load'): Solver sorting key function (or :class:`Solver` attribute...
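The `order_by` behavior described above (accepting either an attribute name or a key callable) can be sketched generically with `operator.attrgetter`; `sort_solvers` is a hypothetical helper, not the client's method, and the real client additionally handles dotted paths and `None` values.

```python
from operator import attrgetter

def sort_solvers(solvers, order_by="avg_load"):
    """Sketch: sort objects by attribute name or by a key callable."""
    key = order_by if callable(order_by) else attrgetter(order_by)
    return sorted(solvers, key=key)
```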
Deprecated in favor of :meth:`.get_solvers`. def solvers(self, refresh=False, **filters): """Deprecated in favor of :meth:`.get_solvers`.""" warnings.warn("'solvers' is deprecated in favor of 'get_solvers'.", DeprecationWarning) return self.get_solvers(refresh=refresh, **filters)