Create a :class:`~turicreate.toolkits.SupervisedLearningModel`.
This is a generic function that allows you to create any model that
implements SupervisedLearningModel. This function is normally not called
directly; call a specific model's create function instead.
Parameters
----------
dataset : SFrame
... |
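In practice the model-specific create functions are the entry points. A minimal sketch, assuming a small SFrame with a hypothetical target column named 'label':
import turicreate as tc
data = tc.SFrame({'label': [0, 1, 0, 1], 'x1': [1.0, 2.0, 0.5, 3.0]})
# Call a specific model's create function rather than the generic one.
model = tc.logistic_classifier.create(data, target='label', features=['x1'])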
Return predictions for ``dataset``, using the trained supervised_learning
model. Predictions are generated as class labels (0 or
1).
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the same
names as the featur... |
Evaluate the model by making predictions of target values and comparing
these to actual values.
Parameters
----------
dataset : SFrame
Dataset in the same format used for training. The column names and
types of the dataset must be the same as those used in traini... |
Return predictions for ``dataset``, using the trained supervised_learning
model. Predictions are generated as class labels (0 or
1).
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the same
names as the feature... |
Evaluate the model on the given dataset.
Parameters
----------
dataset : SFrame
Dataset in the same format used for training. The column names and
types of the dataset must be the same as those used in training.
metric : str, optional
Name of the eva... |
Predict the target column of the given dataset.
The target column is provided during
:func:`~turicreate.boosted_trees_regression.create`. If the target column is in the
`dataset` it will be ignored.
Parameters
----------
dataset : SFrame
A dataset that has the... |
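For example, with a boosted trees regression model, prediction on new rows might look like the following sketch (column names and data are hypothetical):
import turicreate as tc
train = tc.SFrame({'price': [100.0, 150.0, 90.0], 'sqft': [800, 1200, 700]})
model = tc.boosted_trees_regression.create(train, target='price')
# Any 'price' column in the prediction dataset would be ignored.
predictions = model.predict(tc.SFrame({'sqft': [950, 1100]}))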
Disassemble a code object.
def print_code(co, lasti=-1, level=0):
"""Disassemble a code object."""
code = co.co_code
for constant in co.co_consts:
print('| |' * level, end=' ')
print('constant:', constant)
labels = findlabels(code)
linestarts = dict(fin... |
Convert a decision tree model to protobuf format.
Parameters
----------
decision_tree : DecisionTreeRegressor
A trained scikit-learn tree model.
feature_names: [str]
Name of the input columns.
target: str
Name of the output column.
Returns
-------
model_spec: ... |
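A hedged sketch of how such a conversion is typically invoked through coremltools, assuming a scikit-learn regressor trained on hypothetical data with illustrative feature and target names:
import coremltools
from sklearn.tree import DecisionTreeRegressor

X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]]
y = [1.5, 2.5, 3.5]
sk_model = DecisionTreeRegressor().fit(X, y)
# Convert the fitted tree to a Core ML model and save it to disk.
mlmodel = coremltools.converters.sklearn.convert(sk_model, ['f0', 'f1'], 'target')
mlmodel.save('DecisionTree.mlmodel')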
Check that the predictions are either probabilities or probability vectors.
def _check_prob_and_prob_vector(predictions):
"""
Check that the predictions are either probabilities or probability vectors.
"""
from .._deps import numpy
ptype = predictions.dtype
import array
if ptype not in [float, numpy.n... |
Perform basic error checking for the evaluation metrics. Check
types and sizes of the inputs.
def _supervised_evaluation_error_checking(targets, predictions):
"""
Perform basic error checking for the evaluation metrics. Check
types and sizes of the inputs.
"""
_raise_error_if_not_sarray(targets... |
r"""
Compute the logloss for the given targets and the given predicted
probabilities. This quantity is defined to be the negative of the sum
of the log probability of each observation, normalized by the number of
observations:
.. math::
\textrm{logloss} = - \frac{1}{N} \sum_{i \in 1,\ldots... |
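For the binary case, the definition above can be written as a small numpy sketch (not the library implementation):
import numpy as np

def log_loss(targets, probabilities, eps=1e-15):
    # Clip to avoid log(0), then average the negative log probability
    # assigned to the observed class.
    p = np.clip(np.asarray(probabilities, dtype=float), eps, 1 - eps)
    t = np.asarray(targets, dtype=float)
    return -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))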
r"""
Compute the maximum absolute deviation between two SArrays.
Parameters
----------
targets : SArray[float or int]
An SArray of ground truth target values.
predictions : SArray[float or int]
The prediction that corresponds to each target value.
This vector must have the ... |
r"""
Compute the root mean squared error between two SArrays.
Parameters
----------
targets : SArray[float or int]
An SArray of ground truth target values.
predictions : SArray[float or int]
The prediction that corresponds to each target value.
This vector must have the sam... |
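The computation itself is straightforward; a minimal numpy sketch of the same quantity:
import numpy as np

def rmse(targets, predictions):
    # Square root of the mean squared difference between targets and predictions.
    diff = np.asarray(targets, dtype=float) - np.asarray(predictions, dtype=float)
    return np.sqrt(np.mean(diff ** 2))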
r"""
Compute the confusion matrix for classifier predictions.
Parameters
----------
targets : SArray
Ground truth class labels (cannot be of type float).
predictions : SArray
The prediction that corresponds to each target value.
This vector must have the same length as ``ta... |
r"""
Compute the accuracy score; which measures the fraction of predictions made
by the classifier that are exactly correct. The score lies in the range [0,1]
with 0 being the worst and 1 being the best.
Parameters
----------
targets : SArray
An SArray of ground truth class labels. Can ... |
r"""
Compute the F-beta score. The F-beta score is the weighted harmonic mean of
precision and recall. The score lies in the range [0,1] with 1 being ideal
and 0 being the worst.
The `beta` value is the weight given to `precision` vs `recall` in the
combined score. `beta=0` considers only precision... |
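A short illustration of the weighting, using the standard F-beta formula computed from precision and recall (the numeric values are hypothetical):
def fbeta(precision, recall, beta):
    # beta < 1 weights precision more heavily; beta > 1 weights recall more heavily.
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

print(fbeta(0.8, 0.5, beta=0.5))  # closer to the precision of 0.8
print(fbeta(0.8, 0.5, beta=2.0))  # closer to the recall of 0.5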
r"""
Compute the F1 score (sometimes known as the balanced F-score or
F-measure). The F1 score is the harmonic mean of
precision and recall. The score lies in the range [0,1] with 1 being ideal
and 0 being the worst.
The F1 score is defined as:
.. math::
f_{1}... |
r"""
Compute the precision score for classification tasks. The precision score
quantifies the ability of a classifier to not label a `negative` example as
`positive`. The precision score can be interpreted as the probability that
a `positive` prediction made by the classifier is `positive`. The score i... |
r"""
Compute the area under the ROC curve for the given targets and predictions.
Parameters
----------
targets : SArray
An SArray containing the observed values. For binary classification,
the alpha-numerically first category is considered the reference
category.
prediction... |
Fetches the meta data for the current library. The data could be in
the superlib meta data file. If the data cannot be found, None is returned.
def get_library_meta(self):
'''
Fetches the meta data for the current library. The data could be in
the superlib meta data file. If we can't find ... |
Convert a trained XGBoost model to Core ML format.
Parameters
----------
decision_tree : Booster
A trained XGBoost tree model.
feature_names: [str] | str
Names of input features that will be exposed in the Core ML model
interface.
Can be set to one of the following:
... |
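A sketch of the typical call through coremltools, assuming an already-trained xgboost Booster; the data, parameters, and feature names are illustrative:
import numpy as np
import xgboost as xgb
import coremltools

dtrain = xgb.DMatrix(np.array([[1.0, 2.0], [2.0, 1.0]]), label=[1.0, 2.0],
                     feature_names=['f0', 'f1'])
booster = xgb.train({'max_depth': 2}, dtrain, num_boost_round=2)
# Expose the same feature names in the Core ML model interface.
mlmodel = coremltools.converters.xgboost.convert(booster, feature_names=['f0', 'f1'])
mlmodel.save('XGBoostModel.mlmodel')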
Dumps a serializable object to JSON. This API maps to the Python built-in
json dumps method, with a few differences:
* The return value is always valid JSON according to RFC 7159.
* The input can be any of the following types:
- SFrame
- SArray
- SGraph
- single flexible_typ... |
Visualizes drawings (ground truth or predictions) by
returning images to represent the stroke-based data from
the user.
Parameters
----------
stroke_based_drawings: SArray or list
An `SArray` of type `list`. Each element in the SArray
should be a list of strokes, where each stroke... |
Fit a transformer using the SFrame `data`.
Parameters
----------
data : SFrame
The data used to fit the transformer.
Returns
-------
self (A fitted version of the object)
See Also
--------
transform, fit_transform
Examples
... |
Returns a structured description of the model, including (where
relevant) the schema of the training data, description of the training
data, training statistics, and model hyperparameters.
Returns
-------
sections : list (of list of tuples)
A list of summary sections... |
For each example in the dataset, extract the leaf indices of
each tree as features.
For multiclass classification, each leaf index contains #num_class
numbers.
The returned feature vectors can be used as input to train another
supervised learning model such as a
:py:cla... |
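A hedged usage sketch of this kind of leaf-index feature extraction on a tree model (the column names and data are hypothetical):
import turicreate as tc
train = tc.SFrame({'label': [0, 1, 0, 1], 'x1': [1.0, 2.0, 0.5, 3.0]})
model = tc.boosted_trees_classifier.create(train, target='label')
# One leaf index per tree for each row; usable as input features elsewhere.
train['leaf_indices'] = model.extract_features(train)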
Extract features along with all the missing features associated with
a dataset.
Parameters
----------
dataset : SFrame
Dataset on which to make predictions.
missing_value_action: str, optional
Action to perform when missing values are encountered. This can ... |
Dump the models into a list of strings. Each
string is a text representation of a tree.
Parameters
----------
with_stats : bool
If true, include node statistics in the output.
Returns
-------
out : SFrame
A table with two columns: feature... |
Dump the models into a list of strings. Each
string is a text representation of a tree.
Parameters
----------
with_stats : bool
If true, include node statistics in the output.
Returns
-------
out : SFrame
A table with two columns: feature... |
Returns a structured description of the model, including (where relevant)
the schema of the training data, description of the training data,
training statistics, and model hyperparameters.
Returns
-------
sections : list (of list of tuples)
A list of summary sections... |
Convert a boosted tree model to protobuf format.
Parameters
----------
decision_tree : GradientBoostingRegressor
A trained scikit-learn tree model.
input_feature: [str]
Name of the input columns.
output_features: str
Name of the output column.
Returns
-------
... |
Sort a dictionary of classes and corresponding vote totals according to the
votes, then truncate to the highest 'k' classes.
def _sort_topk_votes(x, k):
"""
Sort a dictionary of classes and corresponding vote totals according to the
votes, then truncate to the highest 'k' classes.
"""
y = sorte... |
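The truncated body above likely amounts to something like the following sketch (not necessarily the exact implementation):
def sort_topk_votes(x, k):
    # Sort (class, votes) pairs by vote count, descending, and keep the top k.
    topk = sorted(x.items(), key=lambda kv: kv[1], reverse=True)[:k]
    return [{'class': c, 'votes': v} for c, v in topk]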
Construct a composite distance function for a set of features, based on the
types of those features.
NOTE: This function is very similar to
:func:`_nearest_neighbors.choose_auto_distance`. The function is separate
because the auto-distance logic is different from that of each nearest
neighbors-based toolk... |
Create a
:class:`~turicreate.nearest_neighbor_classifier.NearestNeighborClassifier`
model. This model predicts the class of a query instance by finding the most
common class among the query's nearest neighbors.
.. warning::
The 'dot_product' distance is deprecated and will be removed in future... |
A function to load a previously saved NearestNeighborClassifier model.
Parameters
----------
unpickler : GLUnpickler
A GLUnpickler file handler.
version : int
Version number maintained by the class writer.
def _load_version(cls, state, version):
"""
... |
Return the predicted class for each observation in *dataset*. This
prediction is made based on the closest neighbors stored in the nearest
neighbors classifier model.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the sa... |
Return predicted class labels for instances in *dataset*. This model
makes predictions based on the closest neighbors stored in the nearest
neighbors classifier model.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include the features used ... |
Return top-k most likely predictions for each observation in
``dataset``. Predictions are returned as an SFrame with three columns:
`row_id`, `class`, and `probability`.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include the features use... |
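A self-contained usage sketch with a nearest neighbor classifier and hypothetical data:
import turicreate as tc
train = tc.SFrame({'label': ['a', 'b', 'a', 'b'], 'x1': [1.0, 2.0, 0.9, 2.1]})
model = tc.nearest_neighbor_classifier.create(train, target='label')
queries = tc.SFrame({'x1': [1.1, 1.9]})
# One row per (query row, candidate class), up to k classes per query.
topk = model.predict_topk(queries, k=2)   # columns: row_id, class, probability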
Evaluate the model's predictive accuracy. This is done by predicting the
target class for instances in a new dataset and comparing to known
target values.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the same
... |
A compact version of __repr__ for each of the steps.
def _compact_class_repr(obj):
""" A compact version of __repr__ for each of the steps.
"""
dict_str_list = []
post_repr_string = ""
# If features are present, then shorten it.
init_func = obj.__init__
if _sys.... |
Internal function to perform fit_transform() on all but last step.
def _preprocess(self, data):
"""
Internal function to perform fit_transform() on all but last step.
"""
transformed_data = _copy(data)
for name, step in self._transformers[:-1]:
transformed_data = ste... |
Fits a transformer using the SFrame `data`.
Parameters
----------
data : SFrame
The data used to fit the transformer.
Returns
-------
self (A fitted object)
See Also
--------
transform, fit_transform
Examples
-------... |
First fit a transformer using the SFrame `data` and then return a transformed
version of `data`.
Parameters
----------
data : SFrame
The data used to fit the transformer. The same data is then also
transformed.
Returns
-------
Transformed... |
Transform the SFrame `data` using a fitted model.
Parameters
----------
data : SFrame
The data to be transformed.
Returns
-------
out: SFrame
A transformed SFrame.
See Also
... |
A function to load an object with a specific version of the class.
Parameters
----------
pickler : file
A GLUnpickler file handle.
version : int
A version number as maintained by the class writer.
def _load_version(cls, unpickler, version):
"""
... |
Compute the PageRank for each vertex in the graph. Return a model object
with total PageRank as well as the PageRank value for each vertex in the
graph.
Parameters
----------
graph : SGraph
The graph on which to compute the pagerank value.
reset_probability : float, optional
Pr... |
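A minimal usage sketch with a toy graph (the edge data is hypothetical):
import turicreate as tc
edges = tc.SFrame({'src': [1, 2, 3], 'dst': [2, 3, 1]})
g = tc.SGraph().add_edges(edges, src_field='src', dst_field='dst')
pr = tc.pagerank.create(g, reset_probability=0.15)
print(pr['pagerank'])   # SFrame with a pagerank value per vertex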
Initializes the gcc toolset for the given version. If necessary, command may
be used to specify where the compiler is located. The parameter 'options' is a
space-delimited list of options, each one specified as
<option-name>option-value. Valid option names are: cxxflags, linkflags and
li... |
Now, the vendor specific flags.
The parameter linker can be either gnu, darwin, osf, hpux or sun.
def init_link_flags(toolset, linker, condition):
"""
Now, the vendor specific flags.
The parameter linker can be either gnu, darwin, osf, hpux or sun.
"""
toolset_link = toolset + '.lin... |
Adds a dependency from 'targets' to 'sources'
Both 'targets' and 'sources' can be either a list
of target names or a single target name.
def add_dependency (self, targets, sources):
"""Adds a dependency from 'targets' to 'sources'
Both 'targets' and 'sources' can be either a list
... |
Gets the value of `variable` as set on the first target in `targets`.
Args:
targets (str or list): one or more targets to get the variable from.
variable (str): the name of the variable
Returns:
the value of `variable` set on `targets` (list)
Example:
... |
Sets a target variable.
The 'variable' will be available to bjam when it decides
where to generate targets, and will also be available to
updating rule for that 'target'.
def set_target_variable (self, targets, variable, value, append=0):
""" Sets a target variable.
The 'variab... |
Binds a target to the corresponding update action.
If target needs to be updated, the action registered
with action_name will be used.
The 'action_name' must be previously registered by
either 'register_action' or 'register_bjam_action'
method.
def set_update... |
Creates a new build engine action.
Creates on bjam side an action named 'action_name', with
'command' as the command to be executed, 'bound_variables'
naming the list of variables bound when the command is executed
and specified flag.
If 'function' is not None, it should be a ca... |
Informs self that 'action_name' is declared in bjam.
From this point, 'action_name' is a valid argument to the
set_update_action method. The action_name should be callable
in the global module of bjam.
def register_bjam_action (self, action_name, function=None):
"""Informs self that 'a... |
Returns the pixel data stored in the Image object.
Returns
-------
out : numpy.array
The pixel data of the Image object. It returns a multi-dimensional
numpy array, where the shape of the array represents the shape of
the image (height, width, channels).
... |
Displays the image. Requires PIL/Pillow.
Alternatively, you can create an :class:`turicreate.SArray` of this image
and use :py:func:`turicreate.SArray.show()`
See Also
--------
turicreate.image_analysis.resize
Examples
--------
>>> img = turicreate.Image... |
Return predictions for the model. The kwargs get passed into the
model as a dictionary.
Parameters
----------
data : dict[str, value]
Dictionary of data to make predictions from where the keys are
the names of the input features.
useCPUOnly : bool
... |
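A hedged example of calling predict on a loaded Core ML model; the model path and the input feature name 'x' are hypothetical and must match the model's interface:
import coremltools
mlmodel = coremltools.models.MLModel('MyModel.mlmodel')   # hypothetical path
out = mlmodel.predict({'x': 1.0}, useCPUOnly=True)
print(out)   # dict keyed by the model's output feature names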
Visualize the model.
Parameters
----------
port : int
If the server is to be hosted on a specific localhost port.
input_shape_dict : dict
The shapes are calculated assuming the batch and sequence
are 1 i.e. (1, 1, C, H, W). If ei... |
Construct composite distance parameters based on selected features and their
types.
def _construct_auto_distance(feature_names, column_names, column_types, sample):
"""
Construct composite distance parameters based on selected features and their
types.
"""
## Make a dictionary from the column_... |
Create a nearest neighbor model, which can be searched efficiently and
quickly for the nearest neighbors of a query observation. If the `method`
argument is specified as `auto`, the type of model is chosen automatically
based on the type of data in `dataset`.
.. warning::
The 'dot_product' dis... |
Returns a structured description of the model, including (where
relevant) the schema of the training data, description of the training
data, training statistics, and model hyperparameters.
Returns
-------
sections : list (of list of tuples)
A list of summary sections... |
List the fields stored in the model, including data, model, and
training options. Each field can be queried with the ``get`` method.
Returns
-------
out : list
List of fields queryable with the ``get`` method.
def _list_fields(self):
"""
List the fields stor... |
Return the value of a given field. The list of all queryable fields is
detailed below, and can be obtained with the
:func:`~turicreate.nearest_neighbors.NearestNeighborsModel._list_fields`
method.
+-----------------------+----------------------------------------------+
| Fi... |
Return a dictionary of statistics collected during creation of the
model. These statistics are also available with the ``get`` method and
are described in more detail in that method's documentation.
Returns
-------
out : dict
Dictionary of statistics compiled during ... |
For each row of the input 'dataset', retrieve the nearest neighbors
from the model's stored data. In general, the query dataset does not
need to be the same as the reference data stored in the model, but if
it is, the 'include_self_edges' parameter can be set to False to
exclude results ... |
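A usage sketch with a small hypothetical reference dataset:
import turicreate as tc
refs = tc.SFrame({'x1': [0.1, 0.9, 1.1], 'x2': [1.0, 0.2, 0.1]})
model = tc.nearest_neighbors.create(refs)
# Nearest 2 reference rows for each query row.
knn = model.query(tc.SFrame({'x1': [0.5], 'x2': [0.5]}), k=2)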
Construct the similarity graph on the reference dataset, which is
already stored in the model. This is conceptually very similar to
running `query` with the reference set, but this method is optimized
for the purpose, syntactically simpler, and automatically removes
self-edges.
... |
Randomly split an SFrame into two SFrames based on the `session_id` such
that one split contains data for a `fraction` of the sessions while the
second split contains all data for the rest of the sessions.
Parameters
----------
dataset : SFrame
Dataset to split. It must contain a column of ... |
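A sketch of calling such a session-based split, assuming a sensor-style SFrame with a 'session_id' column (names and values are illustrative):
import turicreate as tc
data = tc.SFrame({'session_id': [1, 1, 2, 2, 3, 3],
                  'accel_x': [0.1, 0.2, 0.0, 0.3, 0.5, 0.4]})
train, valid = tc.activity_classifier.util.random_split_by_session(
    data, session_id='session_id', fraction=0.8)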
Reads the MS Build XML file at the path and returns its contents.
Keyword arguments:
values -- The map to append the contents to (default {})
def read_msbuild_xml(path, values={}):
"""Reads the MS Build XML file at the path and returns its contents.
Keyword arguments:
values -- The map to append ... |
Reads the MS Build JSON file at the path and returns its contents.
Keyword arguments:
values -- The list to append the contents to (default [])
def read_msbuild_json(path, values=[]):
"""Reads the MS Build JSON file at the path and returns its contents.
Keyword arguments:
values -- The list to ap... |
Script entrypoint.
def main():
"""Script entrypoint."""
# Parse the arguments
parser = argparse.ArgumentParser(
description='Convert MSBuild XML to JSON format')
parser.add_argument(
'-t', '--toolchain', help='The name of the toolchain', required=True)
parser.add_argument(
... |
Merges the values between the current and previous run of the script.
def __merge_json_values(current, previous):
"""Merges the values between the current and previous run of the script."""
for value in current:
name = value['name']
# Find the previous value
previous_value = __find_and... |
Finds the value in the list that corresponds with the value of compare.
def __find_and_remove_value(list, compare):
"""Finds the value in the list that corresponds with the value of compare."""
# next throws if there are no matches
try:
found = next(value for value in list
if v... |
Converts the elements of the given tag found in the root using the func
and appends them to the values.
def __convert(root, tag, values, func):
"""Converts the tag type found in the root and converts them using the func
and appends them to the values.
"""
elements = root.getElementsByTagName(tag)
... |
Converts an EnumProperty node to JSON format.
def __convert_enum(node):
"""Converts an EnumProperty node to JSON format."""
name = __get_attribute(node, 'Name')
logging.debug('Found EnumProperty named %s', name)
converted_values = []
for value in node.getElementsByTagName('EnumValue'):
co... |
Converts a BoolProperty node to JSON format.
def __convert_bool(node):
"""Converts an BoolProperty node to JSON format."""
converted = __convert_node(node, default_value='true')
# Check for a switch for reversing the value
reverse_switch = __get_attribute(node, 'ReverseSwitch')
if reverse_switch... |
Converts a StringListProperty node to JSON format.
def __convert_string_list(node):
"""Converts a StringListProperty node to JSON format."""
converted = __convert_node(node)
# Determine flags for the string list
flags = vsflags(VSFlags.UserValue)
# Check for a separator to determine if it is semi... |
Converts a StringProperty node to JSON format.
def __convert_string(node):
"""Converts a StringProperty node to JSON format."""
converted = __convert_node(node, default_flags=vsflags(VSFlags.UserValue))
return __check_for_flag(converted) |
Converts an XML node to a JSON equivalent.
def __convert_node(node, default_value='', default_flags=vsflags()):
"""Converts a XML node to a JSON equivalent."""
name = __get_attribute(node, 'Name')
logging.debug('Found %s named %s', node.tagName, name)
converted = {}
converted['name'] = name
con... |
Modifies the flags in value if the node contains an Argument.
def __with_argument(node, value):
"""Modifies the flags in value if the node contains an Argument."""
arguments = node.getElementsByTagName('Argument')
if arguments:
logging.debug('Found argument within %s', value['name'])
value... |
Preprocesses occurrences of Argument within the root.
Argument XML values reference other values within the document by name. The
referenced value does not contain a switch. This function will add the
switch associated with the argument.
def __preprocess_arguments(root):
"""Preprocesses occurrences of... |
Retrieves the attribute of the given name from the node.
If not present then the default_value is used.
def __get_attribute(node, name, default_value=''):
"""Retrieves the attribute of the given name from the node.
If not present then the default_value is used.
"""
if node.hasAttribute(name):
... |
Gets the path to the file.
def __get_path(path):
"""Gets the path to the file."""
if not os.path.isabs(path):
path = os.path.join(os.getcwd(), path)
return os.path.normpath(path) |
Gets the output path for a file given the toolchain, rule and output_dir
def __output_path(toolchain, rule, output_dir):
"""Gets the output path for a file given the toolchain, rule and output_dir"""
filename = '%s_%s.json' % (toolchain, rule)
return os.path.join(output_dir, filename) |
Writes a JSON file at the path with the values provided.
def __write_json_file(path, values):
"""Writes a JSON file at the path with the values provided."""
# Sort the keys to ensure ordering
sort_order = ['name', 'switch', 'comment', 'value', 'flags']
sorted_values = [
OrderedDict(
... |
Appends the value to the list.
def __append_list(append_to, value):
"""Appends the value to the list."""
if value is not None:
if isinstance(value, list):
append_to.extend(value)
else:
append_to.append(value) |
Decompile a function into an ast.FunctionDef node.
:param func: python function (cannot be a built-in)
:return: ast.FunctionDef instance.
def decompile_func(func):
'''
Decompile a function into an ast.FunctionDef node.
:param func: python function (cannot be a built-in)
:return:... |
Compile a function from an ast.FunctionDef instance.
:param ast_node: ast.FunctionDef instance
:param filename: path where function source can be found.
:param globals: will be used as func_globals
:return: A python function object
def compile_func(ast_node, filename, globals, **defaults):
... |
Decompile a Python pyc or pyo binary file.
:param bin_pyc: input file object
:param output: output file object
def decompile_pyc(bin_pyc, output=sys.stdout):
'''
Decompile a Python pyc or pyo binary file.
:param bin_pyc: input file object
:param output: output file object
'''
... |
Consumes input extracting definitions.
Args:
a_file: The file like stream to parse.
Raises:
PDDMError if there are any issues.
def ParseInput(self, a_file):
"""Consumes input extracting definitions.
Args:
a_file: The file like stream to parse.
Raises:
PDDMError if there ... |
Parses a list of lines.
Args:
input_lines: A list of strings of input to parse (no newlines on the
strings).
Raises:
PDDMError if there are any issues.
def ParseLines(self, input_lines):
"""Parses list of lines.
Args:
input_lines: A list of strings of input to pars... |
Expands the macro reference.
Args:
macro_ref_str: String of a macro reference (i.e. foo(a, b)).
Returns:
The text from the expansion.
Raises:
PDDMError if there are any issues.
def Expand(self, macro_ref_str):
"""Expands the macro reference.
Args:
macro_ref_str: String o... |
Processes the file contents.
def ProcessContent(self, strip_expansion=False):
"""Processes the file contents."""
self._ParseFile()
if strip_expansion:
# Without a collection the expansions become blank, removing them.
collection = None
else:
collection = MacroCollection()
for sect... |
Convert an Imputer model to the protobuf spec.
Parameters
----------
model: Imputer
A trained Imputer model.
input_features: str
Name of the input column.
output_features: str
Name of the output column.
Returns
-------
model_spec: An object of type Model_pb.
... |
Clear the module state. This is mainly for testing purposes.
def reset ():
""" Clear the module state. This is mainly for testing purposes.
"""
global __all_attributes, __all_features, __implicit_features, __composite_properties
global __subfeature_from_value, __all_top_features, __free_features
gl... |
Declares a new feature with the given name, values, and attributes.
name: the feature name
values: a sequence of the allowable values - may be extended later with feature.extend
attributes: a sequence of the feature's attributes (e.g. implicit, free, propagated, ...)
def feature (name, values, ... |
Sets the default value of the given feature, overriding any previous default.
feature: the name of the feature
value: the default value to assign
def set_default (feature, value):
""" Sets the default value of the given feature, overriding any previous default.
feature: the name of the feat... |
Returns the default property values for the given features.
def defaults(features):
""" Returns the default property values for the given features.
"""
assert is_iterable_typed(features, Feature)
# FIXME: should merge feature and property modules.
from . import property
result = []
for f i... |
Returns true iff all elements of names are valid features.
def valid (names):
""" Returns true iff all elements of names are valid features.
"""
if isinstance(names, str):
names = [names]
assert is_iterable_typed(names, basestring)
return all(name in __all_features for name in names) |
Return the values of the given feature.
def values (feature):
""" Return the values of the given feature.
"""
assert isinstance(feature, basestring)
validate_feature (feature)
return __all_features[feature].values |
Returns true iff 'value_string' is a value
of an implicit feature.
def is_implicit_value (value_string):
""" Returns true iff 'value_string' is a value_string
of an implicit feature.
"""
assert isinstance(value_string, basestring)
if value_string in __implicit_features:
return __... |