Applies a function to a list of remote partitions.
Note: The main use for this is to preprocess the func.
Args:
func: The func to apply
partitions: The list of partitions
Returns:
A list of BaseFramePartition objects.
def _apply_func_to_list_of_partitions(... |
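A minimal, self-contained sketch of the pattern (hypothetical names throughout: `Partition` stands in for BaseFramePartition, and the real manager would dispatch to remote workers):

class Partition:
    """Stand-in for BaseFramePartition; .apply returns a new partition."""
    def __init__(self, data):
        self.data = data

    def apply(self, func, **kwargs):
        return Partition(func(self.data, **kwargs))

def _apply_func_to_list_of_partitions(func, partitions, **kwargs):
    # Fan the (already preprocessed) function out to every partition.
    return [part.apply(func, **kwargs) for part in partitions]

parts = [Partition([1, 2]), Partition([3, 4])]
doubled = _apply_func_to_list_of_partitions(lambda d: [x * 2 for x in d], parts)
print([p.data for p in doubled])  # [[2, 4], [6, 8]]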
Applies a function to select indices.
Note: Your internal function must take a kwarg `internal_indices` for
this to work correctly. This prevents information leakage of the
internal index to the external representation.
Args:
axis: The axis to apply the func over.
... |
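Illustrative sketch of the documented keyword contract: the callable accepts `internal_indices`, i.e. positions local to a partition, so the global index never leaks into it (the function body here is hypothetical):

import pandas as pd

def zero_selected_rows(df, internal_indices=None):
    # `internal_indices` are partition-local positions, not global labels.
    out = df.copy()
    out.iloc[internal_indices] = 0
    return out

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
print(zero_selected_rows(df, internal_indices=[0, 2]))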
Applies a function to a select subset of full columns/rows.
Note: This should be used when you need to apply a function that relies
on some global information for the entire column/row, but only need
to apply a function to a subset.
Important: For your func to operate directly ... |
Apply a function along both axes.
Important: For your func to operate directly on the indices provided,
it must use `row_internal_indices, col_internal_indices` as keyword
arguments.
def apply_func_to_indices_both_axis(
self,
func,
row_indices,
col_ind... |
Apply a function that requires two BaseFrameManager objects.
Args:
axis: The axis to apply the function over (0 - rows, 1 - columns)
func: The function to apply
other: The other BaseFrameManager object to apply func to.
Returns:
A new BaseFrameManager ob... |
Shuffle the partitions based on the `shuffle_func`.
Args:
axis: The axis to shuffle across.
shuffle_func: The function to apply before splitting the result.
lengths: The length of each partition to split the result into.
Returns:
A new BaseFrameManager ... |
Load a parquet object from the file path, returning a DataFrame.
Args:
path: The filepath of the parquet file.
We only support local files for now.
engine: This argument doesn't do anything for now.
kwargs: Passed to parquet's read_pandas function.
def read_parquet(path, engi... |
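Hedged usage sketch, assuming the usual `import modin.pandas as pd` convention; the file path is a placeholder:

import modin.pandas as pd

# Only local files are supported, per the docstring above.
df = pd.read_parquet("data.parquet")
print(df.head())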
Creates a parser function from the given sep.
Args:
sep: The default separator to use for the parser.
Returns:
A function object.
def _make_parser_func(sep):
"""Creates a parser function from the given sep.
Args:
sep: The default separator to use for the parser.
Returns:... |
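A sketch of the factory described here, binding the default separator into a read_csv-style function; this mirrors how read_csv/read_table wrappers are typically produced, though the real delegation target differs:

import pandas

def _make_parser_func(sep):
    # Return a reader with `sep` baked in as the default separator.
    def parser_func(filepath_or_buffer, sep=sep, **kwargs):
        return pandas.read_csv(filepath_or_buffer, sep=sep, **kwargs)
    return parser_func

read_csv = _make_parser_func(sep=",")
read_table = _make_parser_func(sep="\t")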
Read csv file from local disk.
Args:
filepath_or_buffer:
The filepath of the csv file.
We only support local files for now.
kwargs: Keyword arguments in pandas.read_csv
def _read(**kwargs):
"""Read csv file from local disk.
Args:
filepath_or_buffer:
... |
Read SQL query or database table into a DataFrame.
Args:
sql: string or SQLAlchemy Selectable (select or text object) SQL query to be executed or a table name.
con: SQLAlchemy connectable (engine/connection) or database string URI or DBAPI2 connection (fallback mode)
index_col: Column(s) to... |
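Hedged usage sketch with a DBAPI2 connection (the documented fallback mode); the table and the Modin import convention are assumptions:

import sqlite3
import modin.pandas as pd

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b')")
df = pd.read_sql("SELECT * FROM t", con, index_col="id")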
Load a parquet object from the file path, returning a DataFrame.
Ray DataFrame only supports the pyarrow engine for now.
Args:
path: The filepath of the parquet file.
We only support local files for now.
engine: Ray only supports the pyarrow reader.
... |
Read csv file from local disk.
Args:
filepath_or_buffer:
The filepath of the csv file.
We only support local files for now.
kwargs: Keyword arguments in pandas.read_csv
def _read(cls, **kwargs):
"""Read csv file from local disk.
Args:
... |
Make a feature mask of categorical features in X.
Features with fewer than 10 unique values are considered categorical.
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
Dense array or sparse matrix.
threshold : int
Maximum number of unique values... |
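A self-contained sketch of the masking rule described above, for a dense array; the function name is hypothetical:

import numpy as np

def _auto_select_categorical(X, threshold=10):
    # A column is 'categorical' when it has fewer than `threshold` uniques.
    return np.array([np.unique(X[:, i]).size < threshold
                     for i in range(X.shape[1])])

X = np.array([[0, 1.5], [1, 2.7], [0, 3.14], [1, 0.0]])
print(_auto_select_categorical(X, threshold=3))  # [ True False]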
Split X into selected features and other features
def _X_selected(X, selected):
"""Split X into selected features and other features"""
n_features = X.shape[1]
ind = np.arange(n_features)
sel = np.zeros(n_features, dtype=bool)
sel[np.asarray(selected)] = True
non_sel = np.logical_not(sel)
n... |
Apply a transform function to portion of selected features.
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
Dense array or sparse matrix.
transform : callable
A callable transform(X) -> X_transformed
copy : boolean, optional
Copy X even... |
Adjust all values in X to encode for NaNs and infinities in the data.
Parameters
----------
X : array-like, shape=(n_samples, n_feature)
Input array of type int.
Returns
-------
X : array-like, shape=(n_samples, n_feature)
Input array without any... |
Assume X contains only categorical features.
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
Dense array or sparse matrix.
def _fit_transform(self, X):
"""Assume X contains only categorical features.
Parameters
---------... |
Fit OneHotEncoder to X, then transform X.
Equivalent to self.fit(X).transform(X), but more convenient and more
efficient. See fit for the parameters, transform for the return value.
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
... |
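For comparison, the same fit-then-transform call sketched with scikit-learn's OneHotEncoder, which exposes the equivalent fit_transform interface:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([[0, 1], [1, 2], [0, 2]])
enc = OneHotEncoder()
X_out = enc.fit_transform(X)  # sparse one-hot matrix by default
print(X_out.toarray())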
Assume X contains only categorical features.
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
Dense array or sparse matrix.
def _transform(self, X):
"""Asssume X contains only categorical features.
Parameters
----------
... |
Transform X using one-hot encoding.
Parameters
----------
X : array-like or sparse matrix, shape=(n_samples, n_features)
Dense array or sparse matrix.
Returns
-------
X_out : sparse matrix if sparse=True else a 2-d array, dtype=int
Transformed in... |
Fit an optimized machine learning pipeline.
Uses genetic programming to optimize a machine learning pipeline that
maximizes score on the provided features and target. Performs internal
k-fold cross-validation to avoid overfitting on the provided data. The
best pipeline is then trained on... |
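Typical end-to-end usage of the estimator this `fit` belongs to, sketched with TPOTClassifier on a toy split (parameter values are illustrative, chosen small so the run finishes quickly):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=2, population_size=10, verbosity=2)
tpot.fit(X_train, y_train)            # evolves, then trains the best pipeline
print(tpot.score(X_test, y_test))     # score with the optimized pipeline
tpot.export('tpot_digits_pipeline.py')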
Set up a Memory object for memory caching.
def _setup_memory(self):
"""Setup Memory object for memory caching.
"""
if self.memory:
if isinstance(self.memory, str):
if self.memory == "auto":
# Create a temporary folder to store the transformers of the... |
Helper function to update the _optimized_pipeline field.
def _update_top_pipeline(self):
"""Helper function to update the _optimized_pipeline field."""
# Store the pipeline with the highest internal testing score
if self._pareto_front:
self._optimized_pipeline_score = -float('inf')
... |
Print out best pipeline at the end of optimization process.
Parameters
----------
features: array-like {n_samples, n_features}
Feature matrix
target: array-like {n_samples}
List of class labels for prediction
Returns
-------
self: object... |
Use the optimized pipeline to predict the target for a feature set.
Parameters
----------
features: array-like {n_samples, n_features}
Feature matrix
Returns
-------
array-like: {n_samples}
Predicted target for the samples in the feature matri... |
Call fit and predict in sequence.
Parameters
----------
features: array-like {n_samples, n_features}
Feature matrix
target: array-like {n_samples}
List of class labels for prediction
sample_weight: array-like {n_samples}, optional
Per-sample w... |
Return the score on the given testing data using the user-specified scoring function.
Parameters
----------
testing_features: array-like {n_samples, n_features}
Feature matrix of the testing set
testing_target: array-like {n_samples}
List of class labels for pred... |
Use the optimized pipeline to estimate the class probabilities for a feature set.
Parameters
----------
features: array-like {n_samples, n_features}
Feature matrix of the testing set
Returns
-------
array-like: {n_samples, n_target}
The class pro... |
Provide a string of the individual without the parameter prefixes.
Parameters
----------
individual: individual
Individual which should be represented by a pretty string
Returns
-------
A string like str(individual), but with parameter prefixes removed.
def... |
If enough time has passed, save a new optimized pipeline. Currently used in the per generation hook in the optimization loop.
Parameters
----------
gen: int
Generation number
Returns
-------
None
def _check_periodic_pipeline(self, gen):
"""If enough ... |
Export the optimized pipeline as Python code.
Parameters
----------
output_file_name: string
String containing the path and file name of the desired output file
data_file_path: string (default: '')
By default, the path of input dataset is 'PATH/TO/DATA/FILE' by d... |
Impute missing values in a feature set.
Parameters
----------
features: array-like {n_samples, n_features}
A feature matrix
Returns
-------
array-like {n_samples, n_features}
def _impute_values(self, features):
"""Impute missing values in a feature ... |
Check if a dataset has a valid feature set and labels.
Parameters
----------
features: array-like {n_samples, n_features}
Feature matrix
target: array-like {n_samples} or None
List of class labels for prediction
sample_weight: array-like {n_samples} (opti... |
Compile a DEAP pipeline into a sklearn pipeline.
Parameters
----------
expr: DEAP individual
The DEAP pipeline to be compiled
Returns
-------
sklearn_pipeline: sklearn.pipeline.Pipeline
def _compile_to_sklearn(self, expr):
"""Compile a DEAP pipeline... |
Recursively iterate through all objects in the pipeline and set a given parameter.
Parameters
----------
pipeline_steps: array-like
List of (str, obj) tuples from a scikit-learn pipeline or related object
parameter: str
The parameter to assign a value for in each... |
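A hedged sketch of the recursion: walk the (name, obj) tuples, descend into anything that itself has steps (a nested Pipeline or similar), and set the attribute wherever it exists. TPOT's real helper also handles wrapped estimators; this is the core idea only:

from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def set_param_recursive(pipeline_steps, parameter, value):
    for _, obj in pipeline_steps:
        if hasattr(obj, "steps"):                 # nested Pipeline
            set_param_recursive(obj.steps, parameter, value)
        elif hasattr(obj, parameter):
            setattr(obj, parameter, value)

pipe = make_pipeline(DecisionTreeClassifier())
set_param_recursive(pipe.steps, "random_state", 42)
print(pipe.steps[0][1].random_state)  # 42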
Stop optimization process once maximum minutes have elapsed.
def _stop_by_max_time_mins(self):
"""Stop optimization process once maximum minutes have elapsed."""
if self.max_time_mins:
total_mins_elapsed = (datetime.now() - self._start_datetime).total_seconds() / 60.
if total_mi... |
Combine the stats with operator count and CV score, and prepare them to be written to _evaluated_individuals
Parameters
----------
operator_count: int
number of components in the pipeline
cv_score: float
internal cross validation score
individual_stats: dictio... |
Determine the fit of the provided individuals.
Parameters
----------
population: a list of DEAP individual
One individual is a list of pipeline operators and model parameters that can be
compiled by DEAP into a callable function
features: numpy.ndarray {n_samples... |
Preprocess DEAP individuals before pipeline evaluation.
Parameters
----------
individuals: a list of DEAP individual
One individual is a list of pipeline operators and model parameters that can be
compiled by DEAP into a callable function
Returns
-------... |
Update self.evaluated_individuals_ and error message during pipeline evaluation.
Parameters
----------
result_score_list: list
A list of CV scores for evaluated pipelines
eval_individuals_str: list
A list of strings for evaluated pipelines
operator_counts... |
Update self._pbar and error message during pipeline evaluation.
Parameters
----------
pbar_num: int
How many pipelines have been processed
pbar_msg: None or string
Error message
Returns
-------
None
def _update_pbar(self, pbar_num=1, pbar... |
Perform a replacement, insertion, or shrink mutation on an individual.
Parameters
----------
individual: DEAP individual
A list of pipeline operators and model parameters that can be
compiled by DEAP into a callable function
allow_shrink: bool (True)
... |
Generate an expression where each leaf might have a different depth between min_ and max_.
Parameters
----------
pset: PrimitiveSetTyped
Primitive set from which primitives are selected.
min_: int
Minimum height of the produced trees.
max_: int
... |
Count the number of pipeline operators as a measure of pipeline complexity.
Parameters
----------
individual: list
A grown tree with leaves at possibly different depths
depending on the condition function.
Returns
-------
operator_count: int
... |
Update values in the list of result scores and self._pbar during pipeline evaluation.
Parameters
----------
val: float or "Timeout"
CV scores
result_score_list: list
A list of CV scores
Returns
-------
result_score_list: list
... |
Generate a Tree as a list of lists.
The tree is built from the root to the leaves, and it stops growing when
the condition is fulfilled.
Parameters
----------
pset: PrimitiveSetTyped
Primitive set from which primitives are selected.
min_: int
Mini... |
Select categorical features and transform them using OneHotEncoder.
Parameters
----------
X: numpy ndarray, {n_samples, n_components}
New data, where n_samples is the number of samples and n_components is the number of components.
Returns
-------
array-like,... |
Select continuous features and transform them using PCA.
Parameters
----------
X: numpy ndarray, {n_samples, n_components}
New data, where n_samples is the number of samples and n_components is the number of components.
Returns
-------
array-like, {n_samples... |
Fit the StackingEstimator meta-transformer.
Parameters
----------
X: array-like of shape (n_samples, n_features)
The training input samples.
y: array-like, shape (n_samples,)
The target values (integers that correspond to classes in classification, real numbers i... |
Transform data by adding two synthetic features.
Parameters
----------
X: numpy ndarray, {n_samples, n_components}
New data, where n_samples is the number of samples and n_components is the number of components.
Returns
-------
X_transformed: array-like, s... |
Default scoring function: balanced accuracy.
Balanced accuracy computes each class's accuracy using a one-vs-rest
encoding, then takes the unweighted average of the per-class accuracies.
Parameters
----------
y_true: numpy.ndarray {n_samples}
True class labels
y_pred:... |
Transform data by adding two virtual features.
Parameters
----------
X: numpy ndarray, {n_samples, n_components}
New data, where n_samples is the number of samples and n_components
is the number of components.
y: None
Unused
Returns
-... |
Decode operator source and import operator class.
Parameters
----------
sourcecode: string
a string of operator source (e.g. 'sklearn.feature_selection.RFE')
verbose: int, optional (default: 0)
How much information TPOT communicates while it's running.
0 = none, 1 = minimal, 2 = ... |
Recursively iterates through all objects in the pipeline and sets sample weight.
Parameters
----------
pipeline_steps: array-like
List of (str, obj) tuples from a scikit-learn pipeline or related object
sample_weight: array-like
List of sample weight
Returns
-------
sample_w... |
Dynamically create operator class.
Parameters
----------
opsourse: string
operator source in config dictionary (key)
opdict: dictionary
operator params in config dictionary (value)
regression: bool
True if it can be used in TPOTRegressor
classification: bool
True... |
Ensure that the provided value is a positive integer.
Parameters
----------
value: int
The number to evaluate
Returns
-------
value: int
Returns a positive integer
def positive_integer(value):
"""Ensure that the provided value is a positive integer.
Parameters
---... |
Ensure that the provided value is a float in the range [0., 1.].
Parameters
----------
value: float
The number to evaluate
Returns
-------
value: float
Returns a float in the range [0., 1.]
def float_range(value):
"""Ensure that the provided value is a float intege... |
Build the argument parser used when TPOT is run on the command line.
def _get_arg_parser():
"""Main function that is called when TPOT is run on the command line."""
parser = argparse.ArgumentParser(
description=(
'A Python tool that automatically creates and optimizes machine '
'l... |
Converts 'mymodule.myfunc' into the myfunc
object itself so TPOT receives a scoring function.
def load_scoring_function(scoring_func):
"""
Converts 'mymodule.myfunc' into the myfunc
object itself so TPOT receives a scoring function.
"""
if scoring_func and ("." in scoring_func):
try:
... |
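A sketch of the dotted-path resolution using importlib; TPOT's own implementation may differ in its error handling:

import importlib

def load_scoring_function(scoring_func):
    # 'mymodule.myfunc' -> import mymodule and return its myfunc attribute;
    # names without a dot (e.g. built-in scorer names) pass through unchanged.
    if scoring_func and "." in scoring_func:
        module_name, func_name = scoring_func.rsplit(".", 1)
        module = importlib.import_module(module_name)
        return getattr(module, func_name)
    return scoring_func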
Perform a TPOT run.
def tpot_driver(args):
"""Perform a TPOT run."""
if args.VERBOSITY >= 2:
_print_args(args)
input_data = _read_data_file(args)
features = input_data.drop(args.TARGET_NAME, axis=1)
training_features, testing_features, training_target, testing_target = \
train_tes... |
Fit FeatureSetSelector for feature selection.
Parameters
----------
X: array-like of shape (n_samples, n_features)
The training input samples.
y: array-like, shape (n_samples,)
The target values (integers that correspond to classes in classification, real numbers ... |
Make a subset after fit.
Parameters
----------
X: numpy ndarray, {n_samples, n_features}
New data, where n_samples is the number of samples and n_features is the number of features.
Returns
-------
X_transformed: array-like, shape (n_samples, n_features + 1) or... |
Get the boolean mask indicating which features are selected
Returns
-------
support : boolean array of shape [# input features]
An element is True iff its corresponding feature is selected for
retention.
def _get_support_mask(self):
"""
Get the boolean ma... |
Pick two individuals from the population which can do crossover, that is, they share a primitive.
Parameters
----------
population: array of individuals
Returns
-------
tuple: (individual, individual)
Two individuals which are not the same, but share at least one primitive.
... |
Picks a random individual from the population, and performs mutation on a copy of it.
Parameters
----------
population: array of individuals
Returns
-------
individual: individual
An individual which is a mutated copy of one of the individuals in population,
the returned ind... |
Part of an evolutionary algorithm applying only the variation part
(crossover, mutation **or** reproduction). The modified individuals have
their fitness invalidated. The individuals are cloned so the returned
population is independent of the input population.
:param population: A list of individuals to var... |
Initializes the stats dict for an individual.
The statistics initialized are:
'generation': generation in which the individual was evaluated. Initialized as: 0
'mutation_count': number of mutation operations applied to the individual and its predecessor cumulatively. Initialized as: 0
'crossover... |
This is the :math:`(\mu + \lambda)` evolutionary algorithm.
:param population: A list of individuals.
:param toolbox: A :class:`~deap.base.Toolbox` that contains the evolution
operators.
:param mu: The number of individuals to select for the next generation.
:param lambda\_: The numb... |
Randomly select a crossover point in each individual and exchange the
subtrees rooted at those points between the two individuals.
:param ind1: First tree participating in the crossover.
:param ind2: Second tree participating in the crossover.
:returns: A tuple of two trees.
def cxOnePoint(ind1, ind2):
"""Randomly selec... |
Replaces a randomly chosen primitive from *individual* with a randomly
chosen primitive from the :attr:`pset` attribute of the individual,
regardless of whether it has the same number of arguments.
Parameters
----------
individual: DEAP individual
A list of pipeline operators and model parameters that c... |
Fit estimator and compute scores for a given dataset split.
Parameters
----------
sklearn_pipeline : pipeline object implementing 'fit'
The object to use to fit the data.
features : array-like of shape at least 2D
The data to fit.
target : array-like, optional, default: None
... |
Return operator class instance by name.
Parameters
----------
opname: str
Name of the sklearn class that belongs to a TPOT operator
operators: list
List of operator classes from operator library
Returns
-------
ret_op_class: class
An operator class
def get_by_name(... |
Generate source code for a TPOT Pipeline.
Parameters
----------
exported_pipeline: deap.creator.Individual
The pipeline that is being exported
operators:
List of operator classes from operator library
pipeline_score:
Optional pipeline score to be saved to the exported file
... |
Convert the unstructured DEAP pipeline into a tree data-structure.
Parameters
----------
ind: deap.creator.Individual
The pipeline that is being exported
Returns
-------
pipeline_tree: list
List of operators in the current optimized pipeline
EXAMPLE:
pipeline:
... |
Generate all library import calls for use in TPOT.export().
Parameters
----------
pipeline: List
List of operators in the current optimized pipeline
operators:
List of operator class from operator library
impute : bool
Whether to impute new values in the feature set.
Re... |
Generate code specific to the construction of the sklearn Pipeline.
Parameters
----------
pipeline_tree: list
List of operators in the current optimized pipeline
Returns
-------
Source code for the sklearn pipeline
def generate_pipeline_code(pipeline_tree, operators):
"""Generate ... |
Generate code specific to the construction of the sklearn Pipeline for export_pipeline.
Parameters
----------
pipeline_tree: list
List of operators in the current optimized pipeline
Returns
-------
Source code for the sklearn pipeline
def generate_export_pipeline_code(pipeline_tree, o... |
Indent a multiline string by some number of spaces.
Parameters
----------
text: str
The text to be indented
amount: int
The number of spaces to indent the text
Returns
-------
indented_text
def _indent(text, amount):
"""Indent a multiline string by some number of space... |
Get the next value in the page.
def next(self):
"""Get the next value in the page."""
item = six.next(self._item_iter)
result = self._item_to_value(self._parent, item)
# Since we've successfully got the next value from the
# iterator, we update the number of remaining.
s... |
Verifies the parameters don't use any reserved parameter.
Raises:
ValueError: If a reserved parameter is used.
def _verify_params(self):
"""Verifies the parameters don't use any reserved parameter.
Raises:
ValueError: If a reserved parameter is used.
"""
... |
Get the next page in the iterator.
Returns:
Optional[Page]: The next page in the iterator or :data:`None` if
there are no pages left.
def _next_page(self):
"""Get the next page in the iterator.
Returns:
Optional[Page]: The next page in the iterator or :... |
Getter for query parameters for the next request.
Returns:
dict: A dictionary of query parameters.
def _get_query_params(self):
"""Getter for query parameters for the next request.
Returns:
dict: A dictionary of query parameters.
"""
result = {}
... |
Requests the next page from the path provided.
Returns:
dict: The parsed JSON response of the next page's contents.
Raises:
ValueError: If the HTTP method is not ``GET`` or ``POST``.
def _get_next_page_response(self):
"""Requests the next page from the path provided.
... |
Get the next page in the iterator.
Wraps the response from the :class:`~google.gax.PageIterator` in a
:class:`Page` instance and captures some state at each page.
Returns:
Optional[Page]: The next page in the iterator or :data:`None` if
there are no pages left.
d... |
Get the next page in the iterator.
Returns:
Optional[Page]: The next page in the iterator or :data:`None` if
there are no pages left.
def _next_page(self):
"""Get the next page in the iterator.
Returns:
Optional[Page]: The next page in the iterator or :data:`None` if
... |
Determines whether or not there are more pages with results.
Returns:
bool: Whether the iterator has more pages.
def _has_next_page(self):
"""Determines whether or not there are more pages with results.
Returns:
bool: Whether the iterator has more pages.
"""
... |
Main comparison function for all Firestore types.
@return -1 if left < right, 0 if left == right, otherwise 1
def compare(cls, left, right):
"""
Main comparison function for all Firestore types.
@return -1 if left < right, 0 if left == right, otherwise 1
"""
# First comp... |
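A hedged sketch of the documented three-way contract, shown for plain comparable Python values rather than the real Firestore type hierarchy:

def compare(left, right):
    # -1 if left < right, 0 if equal, 1 otherwise.
    return (left > right) - (left < right)

assert compare(1, 2) == -1 and compare(2, 2) == 0 and compare(3, 2) == 1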
Service that performs image detection and annotation for a batch of files.
Now only "application/pdf", "image/tiff" and "image/gif" are supported.
This service will extract at most the first 10 frames (gif) or pages
(pdf or tiff) from each file provided and perform detection and annotation
... |
Run asynchronous image detection and annotation for a list of images.
Progress and results can be retrieved through the
``google.longrunning.Operations`` interface. ``Operation.metadata``
contains ``OperationMetadata`` (metadata). ``Operation.response``
contains ``AsyncBatchAnnotateImag... |
Run asynchronous image detection and annotation for a list of generic
files, such as PDF files, which may contain multiple pages and multiple
images per page. Progress and results can be retrieved through the
``google.longrunning.Operations`` interface. ``Operation.metadata``
contains ``... |
Called by IPython when this module is loaded as an IPython extension.
def load_ipython_extension(ipython):
"""Called by IPython when this module is loaded as an IPython extension."""
from google.cloud.bigquery.magics import _cell_magic
ipython.register_magic_function(
_cell_magic, magic_kind="cell... |
Create a :class:`GoogleAPICallError` from an HTTP status code.
Args:
status_code (int): The HTTP status code.
message (str): The exception message.
kwargs: Additional arguments passed to the :class:`GoogleAPICallError`
constructor.
Returns:
GoogleAPICallError: An in... |
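A minimal sketch of the lookup-and-fallback pattern; the class names mirror google.api_core.exceptions, but this registry is illustrative rather than the real mapping:

class GoogleAPICallError(Exception):
    def __init__(self, message, code=None):
        super().__init__(message)
        self.code = code

class NotFound(GoogleAPICallError):
    pass

_HTTP_CODE_TO_EXCEPTION = {404: NotFound}

def from_http_status(status_code, message, **kwargs):
    error_class = _HTTP_CODE_TO_EXCEPTION.get(status_code, GoogleAPICallError)
    return error_class(message, code=status_code, **kwargs)

err = from_http_status(404, "resource missing")
print(type(err).__name__, err.code)  # NotFound 404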
Create a :class:`GoogleAPICallError` from a :class:`requests.Response`.
Args:
response (requests.Response): The HTTP response.
Returns:
GoogleAPICallError: An instance of the appropriate subclass of
:class:`GoogleAPICallError`, with the message and errors populated
from... |
Create a :class:`GoogleAPICallError` from a :class:`grpc.StatusCode`.
Args:
status_code (grpc.StatusCode): The gRPC status code.
message (str): The exception message.
kwargs: Additional arguments passed to the :class:`GoogleAPICallError`
constructor.
Returns:
Google... |
Create a :class:`GoogleAPICallError` from a :class:`grpc.RpcError`.
Args:
rpc_exc (grpc.RpcError): The gRPC error.
Returns:
GoogleAPICallError: An instance of the appropriate subclass of
:class:`GoogleAPICallError`.
def from_grpc_error(rpc_exc):
"""Create a :class:`GoogleAPICa... |
Make a request over the Http transport to the Cloud Datastore API.
:type http: :class:`requests.Session`
:param http: HTTP object to make requests.
:type project: str
:param project: The project to make the request for.
:type method: str
:param method: The API call method name (i.e., ``runQuery... |
Make a protobuf RPC request.
:type http: :class:`requests.Session`
:param http: HTTP object to make requests.
:type project: str
:param project: The project to connect to. This is
usually your project name in the cloud console.
:type method: str
:param method: The name of ... |
Construct the URL for a particular API call.
This method is used internally to come up with the URL to use when
making RPCs to the Cloud Datastore API.
:type project: str
:param project: The project to connect to. This is
usually your project name in the cloud console.
:type m... |
Perform a ``lookup`` request.
:type project_id: str
:param project_id: The project to connect to. This is
usually your project name in the cloud console.
:type keys: List[.entity_pb2.Key]
:param keys: The keys to retrieve from the datastore.
:type re... |
Perform a ``runQuery`` request.
:type project_id: str
:param project_id: The project to connect to. This is
usually your project name in the cloud console.
:type partition_id: :class:`.entity_pb2.PartitionId`
:param partition_id: Partition ID corresponding to... |