Train a TF Eager model
:param model: cleverhans.model.Model
:param X_train: numpy array with training inputs
:param Y_train: numpy array with training outputs
:param save: boolean controlling the save operation
:param predictions_adv: if set with the adversarial example tensor,
will ... |
Compute the accuracy of a TF Eager model on some data
:param model: instance of cleverhans.model.Model_Eager
with pretrained weights for evaluation.
:param X_test: numpy array with test inputs
:param Y_test: numpy array with test outputs
:param args: dict or argparse `Namespace` object... |
Helper function that computes the current class prediction
:param samples: numpy array with input samples (dims must match x)
:return: the argmax output of predictions, i.e. the current predicted class
def model_argmax(model, samples):
"""
Helper function that computes the current class prediction
:param sam... |
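Modulo the TF-eager plumbing, the helper reduces to an argmax over the model's per-class scores. A minimal numpy sketch (the callable `model` and its output shape are assumptions for illustration, not the cleverhans API):

```python
import numpy as np

def model_argmax(model, samples):
    """Return the predicted class for each sample.

    `model` is assumed to be any callable mapping a batch of inputs
    to a batch of per-class scores (logits or probabilities).
    """
    probs = np.asarray(model(samples))
    # argmax over the class axis gives the current predicted class
    return np.argmax(probs, axis=1)
```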
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `parse_params`
def generate(self, x, **kwargs):
"""
Generate symbolic graph for adversarial examples and return.
:param x: The model's symbolic inputs.
:param kwargs: See `pa... |
Function to initialize the dual variables of the class.
Args:
neural_net_params_object: Object with the neural net weights, biases
and types
init_dual_file: Path to file containing dual variables, if the path
is empty, perform random initialization
Expects numpy dictionary with
lambda... |
Function that performs one step of gradient descent (variant) for the minimum eigenvalue.
Args:
current_vector: current estimate of the eigen vector with minimum eigen
value.
learning_rate: learning rate.
vector_prod_fn: function which returns the product H*x, where H is the matrix
    for which we are computing the eigenvector.
... |
Computes the eigenvector corresponding to the minimum eigenvalue.
Args:
x: initial value of eigenvector.
num_steps: number of optimization steps.
learning_rate: learning rate.
vector_prod_fn: function which takes x and returns product H*x.
Returns:
approximate value of eigenvector.
This functio... |
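The optimization loop described above can be sketched in plain numpy: repeatedly step a unit vector against the gradient of the Rayleigh quotient, with `vector_prod_fn` supplying `H*x`. This is an illustrative reconstruction of the idea, not the library's TF implementation:

```python
import numpy as np

def minimum_eigen_vector(x, num_steps, learning_rate, vector_prod_fn):
    """Gradient-descent sketch: minimize the Rayleigh quotient v^T H v
    over unit vectors v, where H*v is supplied by vector_prod_fn."""
    v = x / np.linalg.norm(x)
    for _ in range(num_steps):
        hv = vector_prod_fn(v)
        rayleigh = v.dot(hv)
        # gradient of the Rayleigh quotient restricted to the unit sphere
        v = v - learning_rate * (hv - rayleigh * v)
        v = v / np.linalg.norm(v)
    return v
```

With a positive step size the component along the smallest-eigenvalue direction shrinks the least, so the iterate rotates toward that eigenvector.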
Computes smallest eigenvector and eigenvalue using Lanczos in pure TF.
This function computes smallest eigenvector and eigenvalue of the matrix
which is implicitly specified by `vector_prod_fn`.
`vector_prod_fn` is a function which takes `x` and returns a product of matrix
in consideration and `x`.
Computati... |
Provides access to the model's Variables.
This may include Variables that are not parameters, such as batch
norm running moments.
:return: A list of all Variables defining the model.
def get_vars(self):
"""
Provides access to the model's Variables.
This may include Variables that are not parame... |
Forward propagation as either no-op or dropping random units.
:param x: The input to the layer
:param dropout: bool specifying whether to drop units
:param dropout_dict: dict
This dictionary is usually not needed.
In rare cases, generally for research purposes, this dictionary
makes ... |
Return a tensor that constructs adversarial examples for the given
input. Generate uses tf.py_func in order to operate over tensors.
:param x: A tensor with the inputs.
:param kwargs: See `parse_params`
def generate(self, x, **kwargs):
"""
Return a tensor that constructs adversarial examples for t... |
:param y: (optional) A tensor with the true labels for an untargeted
attack. If None (and y_target is None) then use the
original labels the classifier assigns.
:param y_target: (optional) A tensor with the target labels for a
targeted attack.
:param confidence: Confide... |
Perform the L_2 attack on the given instance for the given targets.
If self.targeted is true, then the targets represent the target labels.
If self.targeted is false, then the targets are the original class labels.
def attack(self, imgs, targets):
"""
Perform the L_2 attack on the given instance for the gi... |
Run the attack on a batch of instances and labels.
def attack_batch(self, imgs, labs):
    """
    Run the attack on a batch of instances and labels.
"""
def compare(x, y):
if not isinstance(x, (float, int, np.int64)):
x = np.copy(x)
if self.TARGETED:
x[y] -= self.CONFIDENCE
... |
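The truncated `compare` helper follows the Carlini-Wagner convention: handicap the target logit by the confidence margin, then check the argmax. A standalone sketch of that logic (the keyword arguments stand in for the `self.TARGETED`/`self.CONFIDENCE` attributes):

```python
import numpy as np

def compare(x, y, targeted=True, confidence=0.0):
    """Decide whether x (a logits vector or an already-computed class
    index) counts as an attack success against label y.

    In targeted mode, success means the prediction equals the target y,
    and the target logit is handicapped by `confidence` so the attack
    must win by a margin. Untargeted mode is the mirror image.
    """
    if not isinstance(x, (float, int, np.int64)):
        x = np.copy(x)
        if targeted:
            x[y] -= confidence
        else:
            x[y] += confidence
        x = np.argmax(x)
    if targeted:
        return x == y
    return x != y
```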
Load model if present at the specified path.
def maybe_load_model(savedir, container):
"""Load model if present at the specified path."""
if savedir is None:
return
state_path = os.path.join(os.path.join(savedir, 'training_state.pkl.zip'))
if container is not None:
logger.log("Attempting to download m... |
Warn user if running cleverhans from a different directory than tutorial.
def check_installation(cur_file):
"""Warn user if running cleverhans from a different directory than tutorial."""
cur_dir = os.path.split(os.path.dirname(os.path.abspath(cur_file)))[0]
ch_dir = os.path.split(cleverhans.__path__[0])[0]
if... |
Parses command line arguments.
def parse_args():
"""Parses command line arguments."""
parser = argparse.ArgumentParser(
description='Tool to download dataset images.')
parser.add_argument('--input_file', required=True,
help='Location of dataset.csv')
parser.add_argument('--output_di... |
Downloads the image that corresponds to the given row.
Prints a notification if the download fails.
def get_image(row, output_dir):
"""Downloads the image that corresponds to the given row.
Prints a notification if the download fails."""
if not download_image(image_id=row[0],
url=row[1]... |
Downloads one image, crops it, resizes it and saves it locally.
def download_image(image_id, url, x1, y1, x2, y2, output_dir):
"""Downloads one image, crops it, resizes it and saves it locally."""
output_filename = os.path.join(output_dir, image_id + '.png')
if os.path.exists(output_filename):
# Don't downlo... |
Custom py_func with gradient support
def py_func_grad(func, inp, Tout, stateful=True, name=None, grad=None):
"""Custom py_func with gradient support
"""
# Need to generate a unique name to avoid duplicates:
rnd_name = 'PyFuncGrad' + str(np.random.randint(0, 1E+8))
tf.RegisterGradient(rnd_name)(grad)
g = ... |
Forward propagation through the network
:return: dictionary with layer names mapping to activation values.
def fprop(self, x):
    """
    Forward propagation through the network
    :return: dictionary with layer names mapping to activation values.
"""
# Feed forward through the network layers
for ... |
Provides access to the parameters of the given layer.
Works around the non-availability of graph collections in
eager mode.
:layer_name: name of the layer for which parameters are
required, must be one of the string in the
list layer_names
:return: list of pa... |
Provides access to the model's parameters.
Works around the non-availability of graph collections in
eager mode.
:return: A list of all Variables defining the model parameters.
def get_params(self):
"""
Provides access to the model's parameters.
Works around the non-availabili... |
This function displays two images: the original and the adversarial sample
:param original: the original input
:param adversarial: the input after perturbations have been applied
:param figure: if we've already displayed images, use the same plot
:return: the matplot figure to reuse for future samples
def pair... |
This function displays a grid of images to show full misclassification
:param data: grid data of the form:
[nb_classes : nb_classes : img_rows : img_cols : nb_channels]
:return: if necessary, the matplot figure to reuse
def grid_visual(data):
"""
This function displays a grid of images to show full miscl... |
Get logits when the input is perturbed in an interval in adv direction.
Args:
sess: Tf session
model: Model for which we wish to get logits.
x_data: Numpy array corresponding to a single data point of shape
    [height, width, channels].
fgsm_params: Parameters for generating adversa... |
Generate linear extrapolation plot.
Args:
log_prob_adv_array: Numpy array containing log probabilities
y: Tf placeholder for the labels
file_name: Plot filename
min_epsilon: Minimum value of epsilon over the interval
max_epsilon: Maximum value of epsilon over the interval
num_poin... |
Encode IQFeed API messages.
def _send_cmd(self, cmd: str):
"""Encode IQFeed API messages."""
self._sock.sendall(cmd.encode(encoding='latin-1', errors='strict')) |
Send data query to IQFeed API.
def iq_query(self, message: str):
"""Send data query to IQFeed API."""
end_msg = '!ENDMSG!'
recv_buffer = 4096
# Send the historical data request message and buffer the data
self._send_cmd(message)
chunk = ""
data = ""
whi... |
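The buffering loop sketched below mirrors the docstring: keep appending latin-1 decoded chunks until `!ENDMSG!` arrives or the peer closes the connection. `FakeSocket` is a hypothetical stand-in so the loop can be exercised without a live IQFeed connection:

```python
def read_until_end(sock, end_msg='!ENDMSG!', recv_buffer=4096):
    """Accumulate decoded chunks until the end marker appears or the
    peer closes the connection."""
    data = ''
    while True:
        chunk = sock.recv(recv_buffer).decode('latin-1')
        if not chunk:          # connection closed before the marker
            break
        data += chunk
        if end_msg in data:
            break
    return data

class FakeSocket:
    """Stand-in for a real socket: replays canned byte chunks."""
    def __init__(self, chunks):
        self._chunks = list(chunks)

    def recv(self, bufsize):
        return self._chunks.pop(0) if self._chunks else b''
```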
Request historical 5 minute data from DTN.
def get_historical_minute_data(self, ticker: str):
"""Request historical 5 minute data from DTN."""
start = self._start
stop = self._stop
if len(stop) > 4:
stop = stop[:4]
if len(start) > 4:
start = start[:4]
... |
Build pandas DataFrame in memory
def add_data_to_df(self, data: np.array):
    """Build pandas DataFrame in memory"""
col_names = ['high_p', 'low_p', 'open_p', 'close_p', 'volume', 'oi']
data = np.array(data).reshape(-1, len(col_names) + 1)
df = pd.DataFrame(data=data[:, 1:], index=data[:... |
Load ticker list from txt file
def get_tickers_from_file(self, filename):
"""Load ticker list from txt file"""
if not os.path.exists(filename):
log.error("Ticker List file does not exist: %s", filename)
tickers = []
with io.open(filename, 'r') as fd:
for ticker ... |
Write pandas DataFrame to InfluxDB database
def write_dataframe_to_idb(self, ticker):
    """Write pandas DataFrame to InfluxDB database"""
cachepath = self._cache
cachefile = ('%s/%s-1M.csv.gz' % (cachepath, ticker))
if not os.path.exists(cachefile):
log.warn('Import file does... |
connect events
def connect(self):
"""connect events"""
self._cidmotion = self.canvas.mpl_connect('motion_notify_event',
self.onmove)
self._ciddraw = self.canvas.mpl_connect('draw_event', self.clear) |
disconnect events
def disconnect(self):
"""disconnect events"""
self.canvas.mpl_disconnect(self._cidmotion)
self.canvas.mpl_disconnect(self._ciddraw) |
Connect to the window and bring it to the foreground
Args:
    **kwargs: optional arguments
Returns:
    None
def connect(self, **kwargs):
    """
    Connect to the window and bring it to the foreground
Args:
**kwargs: optional arguments
Returns:
None
... |
Get rectangle of app or desktop resolution
Returns:
RECT(left, top, right, bottom)
def get_rect(self):
"""
Get rectangle of app or desktop resolution
Returns:
RECT(left, top, right, bottom)
"""
if self.handle:
left, top, right, bott... |
Take a screenshot and save it to a file (`tmp.png` by default)
Args:
    filename: name of the file where to store the screenshot
Returns:
    the screenshot
def snapshot(self, filename="tmp.png"):
    """
    Take a screenshot and save it to a file (`tmp.png` by default)
... |
Extract the information relevant to plotting from the data.
def extract_data(self):
    """Extract the information relevant to plotting from the data."""
self.time_axis = []
self.cpu_axis = []
self.mem_axis = []
self.timestamp_list = []
plot_data = self.data.get("plot_data", [])
# Split the data into segments at the time dividers and take the extrema of each
for i in plot_data:
timest... |
Get the peak CPU and memory points for each method.
def get_each_method_maximun_cpu_mem(self):
    """Get the peak CPU and memory points for each method."""
    # Enrich self.method_exec_info: store the CPU and memory extrema
self.method_exec_info = deepcopy(self.data.get("method_exec_info", []))
method_exec_info = deepcopy(self.method_exec_info)  # helper copy for the loop
method_i... |
Get the title of the graph.
def _get_graph_title(self):
    """Get the title of the graph."""
start_time = datetime.fromtimestamp(int(self.timestamp_list[0]))
end_time = datetime.fromtimestamp(int(self.timestamp_list[-1]))
end_time = end_time.strftime('%H:%M:%S')
title = "Timespan: %s —— %s" % (start_time, end_tim... |
Plot CPU/memory usage and keypoint counts.
def plot_cpu_mem_keypoints(self):
    """Plot CPU/memory usage and keypoint counts."""
plt.figure(1)
# Start plotting the subplots:
plt.subplot(311)
title = self._get_graph_title()
plt.title(title, loc="center")  # set the plot title
mem_ins = plt.plot(self.time_axis, self.mem_axis, "-", label="Mem(MB)", co... |
Initialize the method objects.
def refresh_method_objects(self):
    """Initialize the method objects."""
self.method_object_dict = {}
for key, method in self.MATCHING_METHODS.items():
method_object = method(self.im_search, self.im_source, self.threshold, self.rgb)
self.method_object_dict.update({key: method_object}) |
Get the keypoints.
def _get_result(self, method_name="kaze"):
    """Get the keypoints."""
method_object = self.method_object_dict.get(method_name)
# Extract the result and the keypoints:
try:
result = method_object.find_best_result()
except Exception:
import traceback
traceback.print_exc()
... |
Get and plot the keypoint matching result.
def get_and_plot_keypoints(self, method_name, plot=False):
    """Get and plot the keypoint matching result."""
if method_name not in self.method_object_dict.keys():
print("'%s' is not in MATCHING_METHODS" % method_name)
return None
kp_sch, kp_src, good, result = self._get_result(metho... |
Start the thread.
def run(self):
    """Start the thread."""
while not self.stop_flag:
timestamp = time.time()
cpu_percent = self.process.cpu_percent() / self.cpu_num
# mem_percent = mem = self.process.memory_percent()
mem_info = dict(self.process.memory_info()._asdict())
... |
Load the images to be matched.
def load_images(self, search_file, source_file):
    """Load the images to be matched."""
self.search_file, self.source_file = search_file, source_file
self.im_search, self.im_source = imread(self.search_file), imread(self.source_file)
# initialize the matching object
self.check_macthing_object = CheckKeypointResult(sel... |
Record profiling data while the helper functions execute.
def profile_methods(self, method_list):
    """Record profiling data while the helper functions execute."""
self.method_exec_info = []
# start the data-recording thread
self.record_thread.stop_flag = False
self.record_thread.start()
for name in method_list:
if name not in self.check_macthing_object.MATCHING_METHODS.... |
Write the profiling data to a file.
def wite_to_json(self, dir_path="", file_name=""):
    """Write the profiling data to a file."""
    # extract the data
data = {
"plot_data": self.record_thread.profile_data,
"method_exec_info": self.method_exec_info,
"search_file": self.search_file,
"source_file": self.source_fil... |
Handle poco-related operations; the parameters differ from airtest's. One screenshot plus one action make up a step, so they must be merged into a single step.
Parameters
----------
step    a complete operation, such as click
prev_step    the previous step, which should be a screenshot
Returns
-------
def translate_poco_step(self, step):
    """
    Handle poco-related operations; the parameters differ from airtest's. One screenshot plus one action make up a step, so they must be merged into a single step.
    Parameters
    ----------... |
Display the corresponding poco operation in Chinese
def func_desc_poco(self, step):
    """ Display the corresponding poco operation in Chinese"""
desc = {
"touch": u"点击UI组件 {name}".format(name=step.get("text", "")),
}
if step['type'] in desc:
return desc.get(step['type'])
else:
return self._translate_desc(step) |
Run a performance test on the specified images.
def profile_different_methods(search_file, screen_file, method_list, dir_path, file_name):
    """Run a performance test on the specified images."""
    profiler = ProfileRecorder(0.05)
    # load the images
    profiler.load_images(search_file, screen_file)
    # pass in the list of methods to be tested
    profiler.profile_methods(method_list)
    # write the profiling data to a file
profiler.wit... |
Plot the results for multiple images.
def plot_profiled_all_images_table(method_list):
    """Plot the results for multiple images."""
high_dpi_dir_path, high_dpi_file_name = "result", "high_dpi.json"
rich_texture_dir_path, rich_texture_file_name = "result", "rich_texture.json"
text_dir_path, text_file_name = "result", "text.json"
image_list = ['high_dpi... |
Get the list of colors corresponding to the methods.
def get_color_list(method_list):
    """Get the list of colors corresponding to the methods."""
    color_list = []
    for method in method_list:
        color = tuple([random() for _ in range(3)])  # draw each line in a random color
color_list.append(color)
return color_list |
Plot the comparison table.
def plot_compare_table(image_list, method_list, color_list, compare_dict, fig_name="", fig_num=111):
    """Plot the comparison table."""
    row_labels = image_list
    # fill in the values:
table_vals = []
for i in range(len(row_labels)):
row_vals = []
for method in method_list:
row_vals.append(compare_d... |
Plot the comparison curves.
def plot_compare_curves(image_list, method_list, color_list, compare_dict, fig_name="", fig_num=111):
    """Plot the comparison curves."""
    plt.subplot(fig_num)
    plt.title(fig_name, loc="center")  # set the plot title
mix_ins = []
for index, method in enumerate(method_list):
mem_ins = plt.plot(image_list, compare_dict... |
Read a tag from the buffer, and return a (tag_bytes, new_pos) tuple.
We return the raw bytes of the tag rather than decoding them. The raw
bytes can then be used to look up the proper decoder. This effectively allows
us to trade some work that would be done in pure-python (decoding a varint)
for work that is... |
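Returning the raw tag bytes only requires scanning to the end of the varint, since every varint byte except the last has its high bit set. A minimal sketch of that idea (illustrative, not the generated-code fast path):

```python
def read_tag(buffer, pos):
    """Return (tag_bytes, new_pos) without decoding the varint tag.

    The raw tag is the slice up to and including the first byte whose
    high bit (0x80) is clear; those bytes can later be used as a dict
    key to look up the proper decoder.
    """
    start = pos
    while buffer[pos] & 0x80:   # high bit set => more bytes follow
        pos += 1
    pos += 1
    return bytes(buffer[start:pos]), pos
```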
Return a constructor for a decoder for fields of a particular type.
Args:
wire_type: The field's wire type.
decode_value: A function which decodes an individual value, e.g.
_DecodeVarint()
def _SimpleDecoder(wire_type, decode_value):
"""Return a constructor for a decoder for fields of a part... |
Like SimpleDecoder but additionally invokes modify_value on every value
before storing it. Usually modify_value is ZigZagDecode.
def _ModifiedDecoder(wire_type, decode_value, modify_value):
"""Like SimpleDecoder but additionally invokes modify_value on every value
before storing it. Usually modify_value is Zig... |
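The `ZigZagDecode` referenced above maps the unsigned ZigZag encoding back to signed integers, so small negative numbers stay small on the wire. A one-line sketch of the standard transform:

```python
def zigzag_decode(value):
    """Map ZigZag-encoded unsigned ints back to signed:
    0 -> 0, 1 -> -1, 2 -> 1, 3 -> -2, ...
    """
    # shift out the sign bit, then flip all bits when it was set
    return (value >> 1) ^ -(value & 1)
```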
Return a constructor for a decoder for a fixed-width field.
Args:
wire_type: The field's wire type.
format: The format string to pass to struct.unpack().
def _StructPackDecoder(wire_type, format):
"""Return a constructor for a decoder for a fixed-width field.
Args:
wire_type: The field's w... |
Returns a decoder for a float field.
This code works around a bug in struct.unpack for non-finite 32-bit
floating-point values.
def _FloatDecoder():
"""Returns a decoder for a float field.
This code works around a bug in struct.unpack for non-finite 32-bit
floating-point values.
"""
local_unpack = str... |
Returns a decoder for a double field.
This code works around a bug in struct.unpack for not-a-number.
def _DoubleDecoder():
"""Returns a decoder for a double field.
This code works around a bug in struct.unpack for not-a-number.
"""
local_unpack = struct.unpack
def InnerDecode(buffer, pos):
# We ex... |
Returns a decoder for a string field.
def StringDecoder(field_number, is_repeated, is_packed, key, new_default):
"""Returns a decoder for a string field."""
local_DecodeVarint = _DecodeVarint
local_unicode = six.text_type
def _ConvertToUnicode(byte_str):
try:
return local_unicode(byte_str, 'utf-8')... |
Returns a decoder for a bytes field.
def BytesDecoder(field_number, is_repeated, is_packed, key, new_default):
"""Returns a decoder for a bytes field."""
local_DecodeVarint = _DecodeVarint
assert not is_packed
if is_repeated:
tag_bytes = encoder.TagBytes(field_number,
wir... |
Returns a decoder for a group field.
def GroupDecoder(field_number, is_repeated, is_packed, key, new_default):
"""Returns a decoder for a group field."""
end_tag_bytes = encoder.TagBytes(field_number,
wire_format.WIRETYPE_END_GROUP)
end_tag_len = len(end_tag_bytes)
assert n... |
Returns a decoder for a map field.
def MapDecoder(field_descriptor, new_default, is_message_map):
"""Returns a decoder for a map field."""
key = field_descriptor
tag_bytes = encoder.TagBytes(field_descriptor.number,
wire_format.WIRETYPE_LENGTH_DELIMITED)
tag_len = len(tag_bytes)... |
Skip a varint value. Returns the new position.
def _SkipVarint(buffer, pos, end):
"""Skip a varint value. Returns the new position."""
# Previously ord(buffer[pos]) raised IndexError when pos is out of range.
# With this code, ord(b'') raises TypeError. Both are handled in
# python_message.py to generate a ... |
Skip a length-delimited value. Returns the new position.
def _SkipLengthDelimited(buffer, pos, end):
"""Skip a length-delimited value. Returns the new position."""
(size, pos) = _DecodeVarint(buffer, pos)
pos += size
if pos > end:
raise _DecodeError('Truncated message.')
return pos |
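Skipping a length-delimited value relies on first decoding its length prefix as a varint. A pure-Python sketch of both pieces (an illustration of the wire format, not protobuf's optimized implementation):

```python
def decode_varint(buffer, pos):
    """Decode one base-128 varint; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buffer[pos]
        result |= (b & 0x7F) << shift   # low 7 bits carry payload
        pos += 1
        if not b & 0x80:                # high bit clear => last byte
            return result, pos
        shift += 7

def skip_length_delimited(buffer, pos, end):
    """Skip a length-delimited value; return the new position."""
    size, pos = decode_varint(buffer, pos)
    pos += size
    if pos > end:
        raise ValueError('Truncated message.')
    return pos
```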
Skip sub-group. Returns the new position.
def _SkipGroup(buffer, pos, end):
"""Skip sub-group. Returns the new position."""
while 1:
(tag_bytes, pos) = ReadTag(buffer, pos)
new_pos = SkipField(buffer, pos, end, tag_bytes)
if new_pos == -1:
return pos
pos = new_pos |
Constructs the SkipField function.
def _FieldSkipper():
"""Constructs the SkipField function."""
WIRETYPE_TO_SKIPPER = [
_SkipVarint,
_SkipFixed64,
_SkipLengthDelimited,
_SkipGroup,
_EndGroup,
_SkipFixed32,
_RaiseInvalidWireType,
_RaiseInvalidWireType,
]
wi... |
parse dumped node
def _parse_node(graph, text):
"""parse dumped node"""
match = _NODEPAT.match(text)
if match is not None:
node = match.group(1)
graph.node(node, label=match.group(2), shape='circle')
return node
match = _LEAFPAT.match(text)
if match is not None:
node... |
Plot specified tree.
Parameters
----------
booster : Booster, XGBModel
Booster or XGBModel instance
num_trees : int, default 0
Specify the ordinal number of target tree
rankdir : str, default "UT"
Passed to graphviz via graph_attr
ax : matplotlib Axes, default None
... |
Constructs the dependency graph.
properties: the build properties.
targets: the targets to consider. If none is specified, uses all.
def construct (self, properties = [], targets = []):
""" Constructs the dependency graph.
properties: the build properties.
... |
Evaluate the model by making predictions of target values and comparing
these to actual values.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the same
names as the target and features used for model training. Additi... |
A flexible and advanced prediction API.
The target column is provided during
:func:`~turicreate.decision_tree.create`. If the target column is in the
`dataset` it will be ignored.
Parameters
----------
dataset : SFrame
A dataset that has the same columns that ... |
Return top-k predictions for the ``dataset``, using the trained model.
Predictions are returned as an SFrame with three columns: `id`,
`class`, and `probability`, `margin`, or `rank`, depending on the ``output_type``
parameter. Input dataset size must be the same as for training of the model.
... |
Return a classification, for each example in the ``dataset``, using the
trained model. The output SFrame contains predictions as class labels
(0 or 1) and probabilities associated with the example.
Parameters
----------
dataset : SFrame
Dataset of new observation... |
get environment variables for slaves
can be passed in as args or envs
def slave_envs(self):
    """
    get environment variables for slaves
    can be passed in as args or envs
"""
if self.hostIP == 'dns':
host = socket.gethostname()
elif self.hostIP == 'ip':
... |
get a ring structure that tends to share nodes with the tree
return a list starting from r
def find_share_ring(self, tree_map, parent_map, r):
"""
get a ring structure that tends to share nodes with the tree
return a list starting from r
"""
nset = set(tree_map[r])
... |
get a ring connection used to recover local data
def get_ring(self, tree_map, parent_map):
"""
get a ring connection used to recover local data
"""
assert parent_map[0] == -1
rlst = self.find_share_ring(tree_map, parent_map, 0)
assert len(rlst) == len(tree_map)
r... |
get the link map; this is a bit hacky, and calls for a better algorithm
to place similar nodes together
def get_link_map(self, nslave):
    """
    get the link map; this is a bit hacky, and calls for a better algorithm
    to place similar nodes together
"""
tree_map, parent_map = self.get_tree(... |
Helper rule to generate a faster alternative to MSVC setup scripts.
We used to call MSVC setup scripts directly in every action, however in
newer MSVC versions (10.0+) they make long-lasting registry queries
which have a significant impact on build time.
def maybe_rewrite_setup(toolset, setup_script, setu... |
Automatically create a suitable classifier model based on the provided
training data.
To use specific options of a desired model, use the ``create`` function
of the corresponding model.
Parameters
----------
dataset : SFrame
Dataset for training the model.
target : string
... |
Adds the specified column to this SFrame. The number of elements in
the data given must match every other column of the SFrame.
If inplace == False (default) this operation does not modify the
current SFrame, returning a new SFrame.
If inplace == True, this operation modifies the curr... |
Adds columns to the SFrame. The number of elements in all columns must
match every other column of the SFrame.
If inplace == False (default) this operation does not modify the
current SFrame, returning a new SFrame.
If inplace == True, this operation modifies the current
SFram... |
Removes the column with the given name from the SFrame.
If inplace == False (default) this operation does not modify the
current SFrame, returning a new SFrame.
If inplace == True, this operation modifies the current
SFrame, returning self.
Parameters
----------
... |
Swaps the columns with the given names.
If inplace == False (default) this operation does not modify the
current SFrame, returning a new SFrame.
If inplace == True, this operation modifies the current
SFrame, returning self.
Parameters
----------
column_name_1 ... |
Rename the columns using the 'names' dict. This changes the names of
the columns given as the keys and replaces them with the names given as
the values.
If inplace == False (default) this operation does not modify the
current SFrame, returning a new SFrame.
If inplace == True,... |
Returns the number of rows.
Returns
-------
out : int
Number of rows in the SFrame.
def num_rows(self):
"""
Returns the number of rows.
Returns
-------
out : int
Number of rows in the SFrame.
"""
if self._is_verte... |
Returns the column names.
Returns
-------
out : list[string]
Column names of the SFrame.
def column_names(self):
"""
Returns the column names.
Returns
-------
out : list[string]
Column names of the SFrame.
"""
if ... |
Returns the column types.
Returns
-------
out : list[type]
Column types of the SFrame.
def column_types(self):
"""
Returns the column types.
Returns
-------
out : list[type]
Column types of the SFrame.
"""
if self... |
Automatically create a suitable regression model based on the provided
training data.
To use specific options of a desired model, use the ``create`` function
of the corresponding model.
Parameters
----------
dataset : SFrame
Dataset for training the model.
target : str
The... |
node is removable only if all of its children are as well.
def removable(self, node):
'''
node is removable only if all of its children are as well.
'''
throw_away = []
for child in self.children(node):
throw_away.append(self.visit(child))
if self.mode == 'exclusive':
return al... |
remove nodes from a list
def reduce(self, body):
'''
remove nodes from a list
'''
i = 0
while i < len(body):
stmnt = body[i]
if self.visit(stmnt):
body.pop(i)
else:
i += 1 |
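The pop-without-advance pattern above generalizes to any in-place filter: after `body.pop(i)` the next element slides into slot `i`, so the index must not be incremented. A standalone sketch (the predicate `should_remove` replaces the visitor's verdict):

```python
def reduce_list(body, should_remove):
    """Remove matching items in place, mirroring the pop/advance pattern."""
    i = 0
    while i < len(body):
        if should_remove(body[i]):
            body.pop(i)   # next element slides into slot i; do not advance
        else:
            i += 1
    return body
```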
Loads WAV file(s) from a path.
Parameters
----------
path : str
Path to WAV files to be loaded.
with_path : bool, optional
Indicates whether a path column is added to the returned SFrame.
recursive : bool, optional
Indicates whether ``load_audio`` should do a recursive dir... |
Registers the given message type in the local database.
Calls to GetSymbol() and GetMessages() will return messages registered here.
Args:
message: a message.Message, to be registered.
Returns:
The provided message.
def RegisterMessage(self, message):
"""Registers the given message type ... |
Gets all registered messages from a specified file.
Only messages already created and registered will be returned; (this is the
case for imported _pb2 modules)
But unlike MessageFactory, this version also returns already defined nested
messages, but does not register any message extensions.
Args:
... |
String hash (djb2) with consistency between py2/py3 and persistence between runs (unlike `hash`).
def _string_hash(s):
    """String hash (djb2) with consistency between py2/py3 and persistence between runs (unlike `hash`)."""
h = 5381
for c in s:
h = h * 33 + ord(c)
return h |
Visualizes bounding boxes (ground truth or predictions) by
returning annotated copies of the images.
Parameters
----------
images: SArray or Image
An `SArray` of type `Image`. A single `Image` instance may also be
given.
annotations: SArray or list
An `SArray` of annotation... |
Create a :class:`~turicreate.toolkits.SupervisedLearningModel`.
This is a generic function that allows you to create any model that
implements SupervisedLearningModel. This function is normally not called
directly; call the specific model's create function instead.
Parameters
----------
dataset : SFrame
... |