Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 17 new columns ({'docstring_summary', 'path', 'argument_list', 'identifier', 'nwo', 'idx', 'no_docstring_code', 'language', 'parameters', 'url', 'function_tokens', 'function', 'docstring', 'score', 'sha', 'docstring_tokens', 'return_statement'}) and 5 missing columns ({'id_', 'query', 'task_name', 'negative', 'positive'}).
This happened while the json dataset builder was generating data using
hf://datasets/Denis641/AdvTestNodocstring/modified_test_new.jsonl (at revision e507f92e1342963d6e0c850362fc44526c14cd32)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
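The check that fails here can be reproduced outside the builder: compare the key sets of the first record in each JSON-Lines shard. A minimal sketch follows; the helper functions and the two inline shards are illustrative, not part of the `datasets` library (the column names mirror the error message above).

```python
import json

def jsonl_columns(lines):
    """Return the set of keys in the first JSON-Lines record."""
    return set(json.loads(lines[0]))

def schema_diff(reference_lines, candidate_lines):
    """Report columns that are new or missing in candidate relative to reference."""
    ref = jsonl_columns(reference_lines)
    cand = jsonl_columns(candidate_lines)
    return {"new": sorted(cand - ref), "missing": sorted(ref - cand)}

# Two shards with mismatched schemas, like the files in this dataset:
triplet_shard = ['{"id_": 0, "query": "q", "task_name": "t", "positive": "p", "negative": "n"}']
codesearch_shard = ['{"idx": 0, "function": "def f(): pass", "docstring": "d"}']

# Shows which columns the second shard adds and which it lacks.
print(schema_diff(triplet_shard, codesearch_shard))
```

Running a check like this over every data file before upload would surface the mismatch before the viewer's Parquet conversion does.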
Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
url: string
sha: string
docstring_summary: string
language: string
parameters: string
return_statement: string
argument_list: string
function_tokens: list<item: string>
  child 0, item: string
function: string
path: string
identifier: string
docstring: string
docstring_tokens: list<item: string>
  child 0, item: string
nwo: string
score: double
idx: int64
no_docstring_code: string
to
{'query': Value(dtype='string', id=None), 'positive': Value(dtype='string', id=None), 'id_': Value(dtype='int64', id=None), 'task_name': Value(dtype='string', id=None), 'negative': Value(dtype='string', id=None)}
because column names don't match
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1577, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1191, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 17 new columns ({'docstring_summary', 'path', 'argument_list', 'identifier', 'nwo', 'idx', 'no_docstring_code', 'language', 'parameters', 'url', 'function_tokens', 'function', 'docstring', 'score', 'sha', 'docstring_tokens', 'return_statement'}) and 5 missing columns ({'id_', 'query', 'task_name', 'negative', 'positive'}).
This happened while the json dataset builder was generating data using
hf://datasets/Denis641/AdvTestNodocstring/modified_test_new.jsonl (at revision e507f92e1342963d6e0c850362fc44526c14cd32)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
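As the error message suggests, one fix is to declare separate configurations in the dataset card's YAML front matter so that files with different schemas are never merged into one builder. A minimal sketch of the linked manual-configuration format follows; the config names and the second `data_files` value are hypothetical placeholders, while `modified_test_new.jsonl` is the file named in the error.

```yaml
# README.md front matter (dataset card) — hypothetical config names;
# only modified_test_new.jsonl is taken from the error message above.
configs:
  - config_name: triplets
    data_files: modified_test_new.jsonl
  - config_name: codesearch
    data_files: other_split.jsonl   # placeholder for the CodeSearchNet-style shard(s)
```

Each config then gets its own schema, and the viewer converts them to Parquet independently.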
Preview rows (columns: query string, positive string, id_ int64, task_name string, negative string):
Row 0
query:
    Return either the full or truncated version of a QIIME-formatted taxonomy string.
    :type p: str
    :param p: A QIIME-formatted taxonomy string: k__Foo; p__Bar; ...
    :type level: str
    :param level: The different level of identification are kingdom (k), phylum (p),
        class (c),order (o), famil...
positive:
    def split_phylogeny(p, level="s"):
        level = level+"__"
        result = p.split(level)
        return result[0]+level+result[1].split(";")[0]
id_: 0
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L159-L177
negative:
    def reset_local_buffers(self):
        agent_ids = list(self.keys())
        for k in agent_ids:
            self[k].reset_agent()

Row 1
query:
    Check to make sure the supplied directory path does not exist, if so, create it. The
    method catches OSError exceptions and returns a descriptive message instead of
    re-raising the error.
    :type d: str
    :param d: It is the full path to a directory.
    :return: Does not return anything, but creates a dire...
positive:
    def ensure_dir(d):
        if not os.path.exists(d):
            try:
                os.makedirs(d)
            except OSError as oe:
                # should not happen with os.makedirs
                # ENOENT: No such file or directory
                if os.errno == errno.ENOENT:
                    msg = twdd("""One or more directories in the pa...
id_: 1
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L180-L206
negative:
    def on_change(self, value):
        self._modifier(self.inst, self.prop, value)

Row 2
query:
    Takes either a file path or an open file handle, checks validity and returns an open
    file handle or raises an appropriate Exception.
    :type fnh: str
    :param fnh: It is the full path to a file, or open file handle
    :type mode: str
    :param mode: The way in which this file will be used, for example to re...
positive:
    def file_handle(fnh, mode="rU"):
        handle = None
        if isinstance(fnh, file):
            if fnh.closed:
                raise ValueError("Input file is closed.")
            handle = fnh
        elif isinstance(fnh, str):
            handle = open(fnh, mode)
        return handle
id_: 2
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L209-L231
negative:
    def merge_partition_offsets(*partition_offsets):
        output = dict()
        for partition_offset in partition_offsets:
            for partition, offset in six.iteritems(partition_offset):
                prev_offset = output.get(partition, 0)
                output[partition] = max(prev_offset, offset)
        return output
Row 3
query:
    Find the user specified categories in the map and create a dictionary to contain the
    relevant data for each type within the categories. Multiple categories will have their
    types combined such that each possible combination will have its own entry in the
    dictionary.
    :type imap: dict
    :param imap: The...
positive:
    def gather_categories(imap, header, categories=None):
        # If no categories provided, return all SampleIDs
        if categories is None:
            return {"default": DataCategory(set(imap.keys()), {})}
        cat_ids = [header.index(cat)
                   for cat in categories if cat in header and "=" not in cat]
        table = O...
id_: 3
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L238-L309
negative:
    def elapsed_time_from(start_time):
        time_then = make_time(start_time)
        time_now = datetime.utcnow().replace(microsecond=0)
        if time_then is None:
            return
        delta_t = time_now - time_then
        return delta_t

Row 4
query:
    Parses the unifrac results file into a dictionary
    :type unifracFN: str
    :param unifracFN: The path to the unifrac results file
    :rtype: dict
    :return: A dictionary with keys: 'pcd' (principle coordinates data) which is a
        dictionary of the data keyed by sample ID, 'eigvals' (eigenvalues), and...
positive:
    def parse_unifrac(unifracFN):
        with open(unifracFN, "rU") as uF:
            first = uF.next().split("\t")
            lines = [line.strip() for line in uF]
        unifrac = {"pcd": OrderedDict(), "eigvals": [], "varexp": []}
        if first[0] == "pc vector number":
            return parse_unifrac_v1_8(unifrac, lines)
        elif fir...
id_: 4
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L311-L334
negative:
    def set_classes(self):
        # Custom field classes on field wrapper
        if self.attrs.get("_field_class"):
            self.values["class"].append(escape(self.attrs.get("_field_class")))
        # Inline class
        if self.attrs.get("_inline"):
            self.values["class"].append("inline")
        # Disabled class
        if self.field.field.disa...
Row 5
query:
    Function to parse data from older version of unifrac file obtained from Qiime version
    1.8 and earlier.
    :type unifrac: dict
    :param unifracFN: The path to the unifrac results file
    :type file_data: list
    :param file_data: Unifrac data lines after stripping whitespace characters.
positive:
    def parse_unifrac_v1_8(unifrac, file_data):
        for line in file_data:
            if line == "":
                break
            line = line.split("\t")
            unifrac["pcd"][line[0]] = [float(e) for e in line[1:]]
        unifrac["eigvals"] = [float(entry) for entry in file_data[-2].split("\t")[1:]]
        unifrac["varexp"] = [floa...
id_: 5
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L337-L356
negative:
    async def stop_bridges(self):
        for task in self.sleep_tasks:
            task.cancel()
        for bridge in self.bridges:
            bridge.stop()

Row 6
query:
    Function to parse data from newer version of unifrac file obtained from Qiime version
    1.9 and later.
    :type unifracFN: str
    :param unifracFN: The path to the unifrac results file
    :type file_data: list
    :param file_data: Unifrac data lines after stripping whitespace characters.
positive:
    def parse_unifrac_v1_9(unifrac, file_data):
        unifrac["eigvals"] = [float(entry) for entry in file_data[0].split("\t")]
        unifrac["varexp"] = [float(entry)*100 for entry in file_data[3].split("\t")]
        for line in file_data[8:]:
            if line == "":
                break
            line = line.split("\t")
            unif...
id_: 6
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L359-L378
negative:
    def is_charge_balanced(reaction):
        charge = 0
        for metabolite, coefficient in iteritems(reaction.metabolites):
            if metabolite.charge is None:
                return False
            charge += coefficient * metabolite.charge
        return charge == 0

Row 7
query:
    Determine color-category mapping. If color_column was specified, then map the category
    names to color values. Otherwise, use the palettable colors to automatically generate
    a set of colors for the group values.
    :type sample_map: dict
    :param unifracFN: Map associating each line of the mapping file with ...
positive:
    def color_mapping(sample_map, header, group_column, color_column=None):
        group_colors = OrderedDict()
        group_gather = gather_categories(sample_map, header, [group_column])
        if color_column is not None:
            color_gather = gather_categories(sample_map, header, [color_column])
            # match sample IDs betw...
id_: 7
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/util.py#L380-L419
negative:
    def is_balance_proof_safe_for_onchain_operations(
            balance_proof: BalanceProofSignedState,
    ) -> bool:
        total_amount = balance_proof.transferred_amount + balance_proof.locked_amount
        return total_amount <= UINT256_MAX
Row 8
query:
    return reverse completment of read
positive:
    def rev_c(read):
        rc = []
        rc_nucs = {'A':'T', 'T':'A', 'G':'C', 'C':'G', 'N':'N'}
        for base in read:
            rc.extend(rc_nucs[base.upper()])
        return rc[::-1]
id_: 8
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/shuffle_genome.py#L27-L35
negative:
    def hypermedia_out():
        request = cherrypy.serving.request
        request._hypermedia_inner_handler = request.handler
        # If handler has been explicitly set to None, don't override.
        if request.handler is not None:
            request.handler = hypermedia_handler

Row 9
query:
    randomly shuffle genome
positive:
    def shuffle_genome(genome, cat, fraction = float(100), plot = True, \
            alpha = 0.1, beta = 100000, \
            min_length = 1000, max_length = 200000):
        header = '>randomized_%s' % (genome.name)
        sequence = list(''.join([i[1] for i in parse_fasta(genome)]))
        length = len(sequence)
        shuffled = []
        # ...
id_: 9
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/shuffle_genome.py#L37-L87
negative:
    def GetEntries(self, parser_mediator, match=None, **unused_kwargs):
        stores = match.get('Stores', {})
        for volume_name, volume in iter(stores.items()):
            datetime_value = volume.get('CreationDate', None)
            if not datetime_value:
                continue
            partial_path = volume['PartialPath']
            event_dat...

Row 10
query:
    If the fit contains statistically insignificant parameters, remove them.
    Returns a pruned fit where all parameters have p-values of the t-statistic below p_max
    Parameters
    ----------
    fit: fm.ols fit object
        Can contain insignificant parameters
    p_max : float
    ...
positive:
    def _prune(self, fit, p_max):
        def remove_from_model_desc(x, model_desc):
            """
            Return a model_desc without x
            """
            rhs_termlist = []
            for t in model_desc.rhs_termlist:
                if not t.factors:
                    # intercept, add anyway
                    ...
id_: 10
task_name: https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/regression.py#L222-L272
negative:
    def hexblock_word(cls, data, address = None,
                      bits = None,
                      separator = ' ',
                      width = 8):
        ...
Row 11
query:
    Return the best fit, based on rsquared
positive:
    def find_best_rsquared(list_of_fits):
        res = sorted(list_of_fits, key=lambda x: x.rsquared)
        return res[-1]
id_: 11
task_name: https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/regression.py#L275-L278
negative:
    def _skip_trampoline(handler):
        data_event, self = (yield None)
        delegate = handler
        event = None
        depth = 0
        while True:
            def pass_through():
                _trans = delegate.send(Transition(data_event, delegate))
                return _trans, _trans.delegate, _trans.event
            if data_event is not...

Row 12
query:
    Return a df with predictions and confidence interval
    Notes
    -----
    The df will contain the following columns:
    - 'predicted': the model output
    - 'interval_u', 'interval_l': upper and lower confidence bounds.
    The result will depend on the following attributes of self:
    ...
positive:
    def _predict(self, fit, df):
        # Add model results to data as column 'predictions'
        df_res = df.copy()
        if 'Intercept' in fit.model.exog_names:
            df_res['Intercept'] = 1.0
        df_res['predicted'] = fit.predict(df_res)
        if not self.allow_negative_predictions:
            df_res....
id_: 12
task_name: https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/regression.py#L292-L338
negative:
    def detach(self, listener):
        if listener in self.listeners:
            self.listeners.remove(listener)

Row 13
query:
    Calculate the relative abundance of each OTUID in a Sample.
    :type biomf: A BIOM file.
    :param biomf: OTU table format.
    :type sampleIDs: list
    :param sampleIDs: A list of sample id's from BIOM format OTU table.
    :rtype: dict
    :return: Returns a keyed on SampleIDs, and the values are dictionaries k...
positive:
    def relative_abundance(biomf, sampleIDs=None):
        if sampleIDs is None:
            sampleIDs = biomf.ids()
        else:
            try:
                for sid in sampleIDs:
                    assert sid in biomf.ids()
            except AssertionError:
                raise ValueError(
                    "\nError while calculating relative abu...
id_: 13
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L11-L41
negative:
    def clean(jail=None,
              chroot=None,
              root=None,
              clean_all=False,
              dryrun=False):
        opts = ''
        if clean_all:
            opts += 'a'
        if dryrun:
            opts += 'n'
        else:
            opts += 'y'
        cmd = _pkg(jail, chroot, root)
        cmd.append('clean')
        if opts:
            cmd.a...
Row 14
query:
    Calculate the mean OTU abundance percentage.
    :type ra: Dict
    :param ra: 'ra' refers to a dictionary keyed on SampleIDs, and the values are
        dictionaries keyed on OTUID's and their values represent the relative
        abundance of that OTUID in that SampleID. 'ra' is the output of
        ...
positive:
    def mean_otu_pct_abundance(ra, otuIDs):
        sids = ra.keys()
        otumeans = defaultdict(int)
        for oid in otuIDs:
            otumeans[oid] = sum([ra[sid][oid] for sid in sids
                                if oid in ra[sid]]) / len(sids) * 100
        return otumeans
id_: 14
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L44-L67
negative:
    def change_frozen_attr(self):
        # Selections are not supported
        if self.grid.selection:
            statustext = _("Freezing selections is not supported.")
            post_command_event(self.main_window, self.StatusBarMsg,
                               text=statustext)
        cursor = self.grid.actio...

Row 15
query:
    Calculate the mean relative abundance percentage.
    :type biomf: A BIOM file.
    :param biomf: OTU table format.
    :type sampleIDs: list
    :param sampleIDs: A list of sample id's from BIOM format OTU table.
    :param transform: Mathematical function which is used to transform smax to another
    ...
positive:
    def MRA(biomf, sampleIDs=None, transform=None):
        ra = relative_abundance(biomf, sampleIDs)
        if transform is not None:
            ra = {sample: {otuID: transform(abd) for otuID, abd in ra[sample].items()}
                  for sample in ra.keys()}
        otuIDs = biomf.ids(axis="observation")
        return mean_otu_pct_abundan...
id_: 15
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L70-L92
negative:
    def close(self):
        if self.device:
            usb.util.dispose_resources(self.device)
            self.device = None

Row 16
query:
    Calculate the total number of sequences in each OTU or SampleID.
    :type biomf: A BIOM file.
    :param biomf: OTU table format.
    :type sampleIDs: List
    :param sampleIDs: A list of column id's from BIOM format OTU table. By default, the
        list has been set to None.
    :type sample_abd: B...
positive:
    def raw_abundance(biomf, sampleIDs=None, sample_abd=True):
        results = defaultdict(int)
        if sampleIDs is None:
            sampleIDs = biomf.ids()
        else:
            try:
                for sid in sampleIDs:
                    assert sid in biomf.ids()
            except AssertionError:
                raise ValueError(
                    ...
id_: 16
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L95-L135
negative:
    def upgrade_api(request, client, version):
        min_ver, max_ver = api_versions._get_server_version_range(client)
        if min_ver <= api_versions.APIVersion(version) <= max_ver:
            client = _nova.novaclient(request, version)
        return client
Row 17
query:
    Function to transform the total abundance calculation for each sample ID to another
    format based on user given transformation function.
    :type biomf: A BIOM file.
    :param biomf: OTU table format.
    :param fn: Mathematical function which is used to transform smax to another format.
        By defaul...
positive:
    def transform_raw_abundance(biomf, fn=math.log10, sampleIDs=None, sample_abd=True):
        totals = raw_abundance(biomf, sampleIDs, sample_abd)
        return {sid: fn(abd) for sid, abd in totals.items()}
id_: 17
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/phylotoast/biom_calc.py#L138-L155
negative:
    def close(self):
        if self.device:
            usb.util.dispose_resources(self.device)
            self.device = None

Row 18
query:
    Compute the Mann-Whitney U test for unequal group sample sizes.
positive:
    def print_MannWhitneyU(div_calc):
        try:
            x = div_calc.values()[0].values()
            y = div_calc.values()[1].values()
        except:
            return "Error setting up input arrays for Mann-Whitney U Test. Skipping "\
                   "significance testing."
        T, p = stats.mannwhitneyu(x, y)
        print "\nMann-Whitn...
id_: 18
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/diversity.py#L54-L66
negative:
    def ParseFileObject(self, parser_mediator, file_object):
        try:
            file_header = self._ReadFileHeader(file_object)
        except (ValueError, errors.ParseError):
            raise errors.UnableToParseFile('Unable to parse file header.')
        tables = self._ReadTablesArray(file_object, file_header.tables_array_offset)
        ...

Row 19
query:
    Compute the Kruskal-Wallis H-test for independent samples. A typical rule is that
    each group must have at least 5 measurements.
positive:
    def print_KruskalWallisH(div_calc):
        calc = defaultdict(list)
        try:
            for k1, v1 in div_calc.iteritems():
                for k2, v2 in v1.iteritems():
                    calc[k1].append(v2)
        except:
            return "Error setting up input arrays for Kruskal-Wallis H-Test. Skipping "\
                   "significanc...
id_: 19
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/diversity.py#L69-L84
negative:
    def _record_offset(self):
        offset = self.blob_file.tell()
        self.event_offsets.append(offset)

Row 20
query:
    Parses the given options passed in at the command line.
positive:
    def handle_program_options():
        parser = argparse.ArgumentParser(description="Calculate the alpha diversity\
                                         of a set of samples using one or more \
                                         metrics and output a kernal density \
                                         estimator-smoothed h...
id_: 20
task_name: https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/diversity.py#L122-L168
negative:
    def fingerprint(self):
        if self.num_vertices == 0:
            return np.zeros(20, np.ubyte)
        else:
            return sum(self.vertex_fingerprints)
Row 21
query:
    make blast db
positive:
    def blastdb(fasta, maxfile = 10000000):
        db = fasta.rsplit('.', 1)[0]
        type = check_type(fasta)
        if type == 'nucl':
            type = ['nhr', type]
        else:
            type = ['phr', type]
        if os.path.exists('%s.%s' % (db, type[0])) is False \
                and os.path.exists('%s.00.%s' % (db, type[0])) is False:
            ...
id_: 21
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/search.py#L28-L46
negative:
    def writln(line, unit):
        lineP = stypes.stringToCharP(line)
        unit = ctypes.c_int(unit)
        line_len = ctypes.c_int(len(line))
        libspice.writln_(lineP, ctypes.byref(unit), line_len)

Row 22
query:
    make usearch db
positive:
    def usearchdb(fasta, alignment = 'local', usearch_loc = 'usearch'):
        if '.udb' in fasta:
            print('# ... database found: %s' % (fasta), file=sys.stderr)
            return fasta
        type = check_type(fasta)
        db = '%s.%s.udb' % (fasta.rsplit('.', 1)[0], type)
        if os.path.exists(db) is False:
            print('# ....
id_: 22
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/search.py#L68-L85
negative:
    def point_to_line(point, segment_start, segment_end):
        # TODO: Needs unittests.
        segment_vec = segment_end - segment_start
        # t is distance along line
        t = -(segment_start - point).dot(segment_vec) / (
            segment_vec.length_squared())
        closest_point = segment_start + scale_v3(segment_vec, t)
        ...

Row 23
query:
    Pretty print.
positive:
    def _pp(dict_data):
        for key, val in dict_data.items():
            # pylint: disable=superfluous-parens
            print('{0:<11}: {1}'.format(key, val))
id_: 23
task_name: https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/control.py#L11-L15
negative:
    def _reserve(self, key):
        self.assign(key, RESERVED)
        try:
            yield
        finally:
            del self._cache[key]

Row 24
query:
    Print licenses.
    :param argparse.Namespace params: parameter
    :param bootstrap_py.classifier.Classifiers metadata: package metadata
positive:
    def print_licences(params, metadata):
        if hasattr(params, 'licenses'):
            if params.licenses:
                _pp(metadata.licenses_desc())
                sys.exit(0)
id_: 24
task_name: https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/control.py#L27-L36
negative:
    def vrel(v1, v2):
        v1 = stypes.toDoubleVector(v1)
        v2 = stypes.toDoubleVector(v2)
        return libspice.vrel_c(v1, v2)
Row 25
query:
    Check repository existence.
    :param argparse.Namespace params: parameters
positive:
    def check_repository_existence(params):
        repodir = os.path.join(params.outdir, params.name)
        if os.path.isdir(repodir):
            raise Conflict(
                'Package repository "{0}" has already exists.'.format(repodir))
id_: 25
task_name: https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/control.py#L39-L47
negative:
    def context(self):
        stats = status_codes_by_date_stats()
        attacks_data = [{
            'type': 'line',
            'zIndex': 9,
            'name': _('Attacks'),
            'data': [(v[0], v[1]['attacks'])
                     for v in stats]
        }]
        codes_data = [{
            'zIndex': 4,...

Row 26
query:
    Generate package repository.
    :param argparse.Namespace params: parameters
positive:
    def generate_package(params):
        pkg_data = package.PackageData(params)
        pkg_tree = package.PackageTree(pkg_data)
        pkg_tree.generate()
        pkg_tree.move()
        VCS(os.path.join(pkg_tree.outdir, pkg_tree.name), pkg_tree.pkg_data)
id_: 26
task_name: https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/control.py#L59-L68
negative:
    def startResponse(self, status, headers, excInfo=None):
        self.status = status
        self.headers = headers
        self.reactor.callInThread(
            responseInColor, self.request, status, headers
        )
        return self.write

Row 27
query:
    print single reads to stderr
positive:
    def print_single(line, rev):
        if rev is True:
            seq = rc(['', line[9]])[1]
            qual = line[10][::-1]
        else:
            seq = line[9]
            qual = line[10]
        fq = ['@%s' % line[0], seq, '+%s' % line[0], qual]
        print('\n'.join(fq), file = sys.stderr)
id_: 27
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/sam2fastq.py#L13-L24
negative:
    def set_cache_dir(directory):
        global cache_dir
        if directory is None:
            cache_dir = None
            return
        if not os.path.exists(directory):
            os.makedirs(directory)
        if not os.path.isdir(directory):
            raise ValueError("not a directory")
        cache_dir = directory
Row 28
query:
    convert sam to fastq
positive:
    def sam2fastq(sam, singles = False, force = False):
        L, R = None, None
        for line in sam:
            if line.startswith('@') is True:
                continue
            line = line.strip().split()
            bit = [True if i == '1' else False \
                   for i in bin(int(line[1])).split('b')[1][::-1]]
            while len(...
id_: 28
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/sam2fastq.py#L26-L78
negative:
    def cublasGetStream(handle):
        id = ctypes.c_int()
        status = _libcublas.cublasGetStream_v2(handle, ctypes.byref(id))
        cublasCheckStatus(status)
        return id.value

Row 29
query:
    sort sam file
positive:
    def sort_sam(sam, sort):
        tempdir = '%s/' % (os.path.abspath(sam).rsplit('/', 1)[0])
        if sort is True:
            mapping = '%s.sorted.sam' % (sam.rsplit('.', 1)[0])
            if sam != '-':
                if os.path.exists(mapping) is False:
                    os.system("\
                        sort -k1 --buffer-size=%sG -T ...
id_: 29
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/subset_sam.py#L14-L37
negative:
    def ssn(self) -> str:
        area = self.random.randint(1, 899)
        if area == 666:
            area = 665
        return '{:03}-{:02}-{:04}'.format(
            area,
            self.random.randint(1, 99),
            self.random.randint(1, 9999),
        )

Row 30
query:
    randomly subset sam file
positive:
    def sub_sam(sam, percent, sort = True, sbuffer = False):
        mapping = sort_sam(sam, sort)
        pool = [1 for i in range(0, percent)] + [0 for i in range(0, 100 - percent)]
        c = cycle([1, 2])
        for line in mapping:
            line = line.strip().split()
            if line[0].startswith('@'): # get the sam header
                ...
id_: 30
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/subset_sam.py#L39-L60
negative:
    def get_max_port_count_for_storage_bus(self, bus):
        if not isinstance(bus, StorageBus):
            raise TypeError("bus can only be an instance of type StorageBus")
        max_port_count = self._call("getMaxPortCountForStorageBus",
                                    in_p=[bus])
        return max_port_count

Row 31
query:
    convert fq to fa
positive:
    def fq2fa(fq):
        c = cycle([1, 2, 3, 4])
        for line in fq:
            n = next(c)
            if n == 1:
                seq = ['>%s' % (line.strip().split('@', 1)[1])]
            if n == 2:
                seq.append(line.strip())
            yield seq
id_: 31
task_name: https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/fastq2fasta.py#L11-L22
negative:
    def query_under_condition(condition, kind='2'):
        if DB_CFG['kind'] == 's':
            return TabPost.select().where(
                (TabPost.kind == kind) & (TabPost.valid == 1)
            ).order_by(
                TabPost.time_update.desc()
            )
        return TabPost.select().where(
            ...
Row 32
query:
    Converts the returned value of wrapped function to the type of the
    first arg or to the type specified by a kwarg key return_type's value.
positive:
    def change_return_type(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            if kwargs.has_key('return_type'):
                return_type = kwargs['return_type']
                kwargs.pop('return_type')
                return return_type(f(*args, **kwargs))
            elif len(args) > 0:
                return_type = type(args[0]...
id_: 32
task_name: https://github.com/elbow-jason/Uno-deprecated/blob/4ad07d7b84e5b6e3e2b2c89db69448906f24b4e4/uno/decorators.py#L11-L27
negative:
    def _timing_representation(message):
        s = _encode_to_binary_string(message, on="=", off=".")
        N = len(s)
        s += '\n' + _numbers_decades(N)
        s += '\n' + _numbers_units(N)
        s += '\n'
        s += '\n' + _timing_char(message)
        return s

Row 33
query:
    Converts all args to 'set' type via self.setify function.
positive:
    def convert_args_to_sets(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            args = (setify(x) for x in args)
            return f(*args, **kwargs)
        return wrapper
id_: 33
task_name: https://github.com/elbow-jason/Uno-deprecated/blob/4ad07d7b84e5b6e3e2b2c89db69448906f24b4e4/uno/decorators.py#L30-L38
negative:
    def list_publications():
        publications = search_publications(
            DBPublication(is_public=True)
        )
        return SimpleTemplate(INDEX_TEMPLATE).render(
            publications=publications,
            compose_path=web_tools.compose_path,
            delimiter=":",
        )

Row 34
query:
    Membuat objek-objek entri dari laman yang diambil.
    :param laman: Laman respons yang dikembalikan oleh KBBI daring.
    :type laman: Response
positive:
    def _init_entri(self, laman):
        sup = BeautifulSoup(laman.text, 'html.parser')
        estr = ''
        for label in sup.find('hr').next_siblings:
            if label.name == 'hr':
                self.entri.append(Entri(estr))
                break
            if label.name == 'h2':
                if estr:
                    ...
id_: 34
task_name: https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L46-L63
negative:
    def diff_text(candidate_config=None,
                  candidate_path=None,
                  running_config=None,
                  running_path=None,
                  saltenv='base'):
        candidate_text = clean(config=candidate_config,
                               path=candidate_path,
                               saltenv=saltenv)
        r...
Row 35
query:
    Memproses kata dasar yang ada dalam nama entri.
    :param dasar: ResultSet untuk label HTML dengan class="rootword"
    :type dasar: ResultSet
positive:
    def _init_kata_dasar(self, dasar):
        for tiap in dasar:
            kata = tiap.find('a')
            dasar_no = kata.find('sup')
            kata = ambil_teks_dalam_label(kata)
            self.kata_dasar.append(
                kata + ' [{}]'.format(dasar_no.text.strip()) if dasar_no else kata
            )
id_: 35
task_name: https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L126-L139
negative:
    def runWizard( self ):
        plugin = self.currentPlugin()
        if ( plugin and plugin.runWizard(self) ):
            self.accept()

Row 36
query:
    Mengembalikan hasil serialisasi objek Entri ini.
    :returns: Dictionary hasil serialisasi
    :rtype: dict
positive:
    def serialisasi(self):
        return {
            "nama": self.nama,
            "nomor": self.nomor,
            "kata_dasar": self.kata_dasar,
            "pelafalan": self.pelafalan,
            "bentuk_tidak_baku": self.bentuk_tidak_baku,
            "varian": self.varian,
            "makna": [makna.serialisasi...
id_: 36
task_name: https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L141-L156
negative:
    def escape_for_cmd_exe(arg):
        meta_chars = '()%!^"<>&|'
        meta_re = re.compile('(' + '|'.join(re.escape(char) for char in list(meta_chars)) + ')')
        meta_map = {char: "^{0}".format(char) for char in meta_chars}
        def escape_meta_chars(m):
            char = m.group(1)
            return meta_map[char]
        return met...

Row 37
query:
    Mengembalikan representasi string untuk semua makna entri ini.
    :returns: String representasi makna-makna
    :rtype: str
positive:
    def _makna(self):
        if len(self.makna) > 1:
            return '\n'.join(
                str(i) + ". " + str(makna)
                for i, makna in enumerate(self.makna, 1)
            )
        return str(self.makna[0])
id_: 37
task_name: https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L158-L170
negative:
    def controlMsg(self, requestType, request, buffer, value = 0, index = 0, timeout = 100):
        return self.dev.ctrl_transfer(
            requestType,
            request,
            wValue = value,
            wIndex = index,
            data_or_wLength = buffer,
            ...
Mengembalikan representasi string untuk nama entri ini.
:returns: String representasi nama entri
:rtype: str | def _nama(self):
hasil = self.nama
if self.nomor:
hasil += " [{}]".format(self.nomor)
if self.kata_dasar:
hasil = " » ".join(self.kata_dasar) + " » " + hasil
return hasil | 38 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L172-L184 | def _setup(self):
self.log.info("Adding reader to prepare to receive.")
self.loop.add_reader(self.dev.fd, self.read)
self.log.info("Flushing the RFXtrx buffer.")
self.flushSerialInput()
self.log.info("Writing the reset packet to the RFXtrx. (blocking)")
yield from self.... |
Mengembalikan representasi string untuk varian entri ini.
Dapat digunakan untuk "Varian" maupun "Bentuk tidak baku".
:param varian: List bentuk tidak baku atau varian
:type varian: list
:returns: String representasi varian atau bentuk tidak baku
:rtype: str | def _varian(self, varian):
if varian == self.bentuk_tidak_baku:
nama = "Bentuk tidak baku"
elif varian == self.varian:
nama = "Varian"
else:
return ''
return nama + ': ' + ', '.join(varian) | 39 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L186-L202 | def compile_all():
# print("Compiling for Qt: style.qrc -> style.rcc")
# os.system("rcc style.qrc -o style.rcc")
print("Compiling for PyQt4: style.qrc -> pyqt_style_rc.py")
os.system("pyrcc4 -py3 style.qrc -o pyqt_style_rc.py")
print("Compiling for PyQt5: style.qrc -> pyqt5_style_rc.py")
os.syst... |
Processes the word classes present in a meaning.
:param makna_label: BeautifulSoup object for the meaning to be processed.
:type makna_label: BeautifulSoup | def _init_kelas(self, makna_label):
kelas = makna_label.find(color='red')
lain = makna_label.find(color='darkgreen')
info = makna_label.find(color='green')
if kelas:
kelas = kelas.find_all('span')
if lain:
self.kelas = {lain.text.strip(): lain['title'].st... | 40 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L239-L259 | def sync(self, since=None, timeout_ms=30000, filter=None,
full_state=None, set_presence=None):
request = {
# non-integer timeouts appear to cause issues
"timeout": int(timeout_ms)
}
if since:
request["since"] = since
if filter:
... |
Processes the usage examples present in a meaning.
:param makna_label: BeautifulSoup object for the meaning to be processed.
:type makna_label: BeautifulSoup | def _init_contoh(self, makna_label):
indeks = makna_label.text.find(': ')
if indeks != -1:
contoh = makna_label.text[indeks + 2:].strip()
self.contoh = contoh.split('; ')
else:
self.contoh = [] | 41 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L261-L273 | def delete_events(self, event_collection, timeframe=None, timezone=None, filters=None):
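The splitting logic above can be exercised on a plain string; the sample text below is an invented stand-in for a KBBI meaning label, not real dataset content:

```python
# Hypothetical meaning text: a definition, then examples separated by '; '
# after the ': ' marker.
text = 'tempat tinggal: rumah adat; rumah panggung'
indeks = text.find(': ')
contoh = text[indeks + 2:].strip().split('; ') if indeks != -1 else []
print(contoh)  # ['rumah adat', 'rumah panggung']
```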
params = self.get_params(timeframe=timeframe, timezone=timezone, filters=filters)
return self.api.delete_events(event_collection, params) |
Returns the serialized form of this Makna object.
:returns: Dictionary containing the serialized result
:rtype: dict | def serialisasi(self):
return {
"kelas": self.kelas,
"submakna": self.submakna,
"info": self.info,
"contoh": self.contoh
} | 42 | https://github.com/laymonage/kbbi-python/blob/1a52ba8bcc6dc4c5c1215f9e00207aca264287d6/kbbi/kbbi.py#L275-L287 | def diff_fromDelta(self, text1, delta):
diffs = []
pointer = 0 # Cursor in text1
tokens = delta.split("\t")
for token in tokens:
if token == "":
# Blank tokens are ok (from a trailing \t).
continue
# Each token begins with a one character parameter which specifies the
... |
Build sphinx documentation.
:rtype: int
:return: subprocess.call return code
:param `bootstrap_py.control.PackageData` pkg_data: package meta data
:param str projectdir: project root directory | def build_sphinx(pkg_data, projectdir):
try:
version, _minor_version = pkg_data.version.rsplit('.', 1)
except ValueError:
version = pkg_data.version
args = ' '.join(('sphinx-quickstart',
'--sep',
'-q',
'-p "{name}"',
... | 43 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/docs.py#L8-L40 | def cli(env, volume_id):
file_manager = SoftLayer.FileStorageManager(env.client)
snapshot_schedules = file_manager.list_volume_schedules(volume_id)
table = formatting.Table(['id',
'active',
'type',
'replication',
... |
make bowtie db | def bowtiedb(fa, keepDB):
btdir = '%s/bt2' % (os.getcwd())
# make directory for
if not os.path.exists(btdir):
os.mkdir(btdir)
btdb = '%s/%s' % (btdir, fa.rsplit('/', 1)[-1])
if keepDB is True:
if os.path.exists('%s.1.bt2' % (btdb)):
return btdb
p = subprocess.Popen('b... | 44 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/crossmap.py#L16-L31 | def parse_resource_data_entry(self, rva):
try:
# If the RVA is invalid all would blow up. Some EXEs seem to be
# specially nasty and have an invalid RVA.
data = self.get_data(rva, Structure(self.__IMAGE_RESOURCE_DATA_ENTRY_format__).sizeof() )
except PEFormatError as... |
generate bowtie2 command | def bowtie(sam, btd, f, r, u, opt, no_shrink, threads):
bt2 = 'bowtie2 -x %s -p %s ' % (btd, threads)
if f is not False:
bt2 += '-1 %s -2 %s ' % (f, r)
if u is not False:
bt2 += '-U %s ' % (u)
bt2 += opt
if no_shrink is False:
if f is False:
bt2 += ' | shrinksam -... | 45 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/crossmap.py#L33-L50 | def format_string(self, s, args, kwargs):
if isinstance(s, Markup):
formatter = SandboxedEscapeFormatter(self, s.escape)
else:
formatter = SandboxedFormatter(self)
kwargs = _MagicFormatMapping(args, kwargs)
rv = formatter.vformat(s, args, kwargs)
return ty... |
map all read sets against all fasta files | def crossmap(fas, reads, options, no_shrink, keepDB, threads, cluster, nodes):
if cluster is True:
threads = '48'
btc = []
for fa in fas:
btd = bowtiedb(fa, keepDB)
F, R, U = reads
if F is not False:
if U is False:
u = False
for i, f in... | 46 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/crossmap.py#L55-L96 | def with_division(self, division):
if division is None:
division = ''
division = slugify(division)
self._validate_division(division)
self.division = division
return self |
Returns a connection object from the router given ``args``.
Useful in cases where a connection cannot be automatically determined
during all steps of the process. An example of this would be
Redis pipelines. | def get_conn(self, *args, **kwargs):
connections = self.__connections_for('get_conn', args=args, kwargs=kwargs)
if len(connections) == 1:
return connections[0]
else:
return connections | 47 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/base.py#L100-L113 | def write_comment(self, comment):
self._FITS.write_comment(self._ext+1, str(comment)) |
return the non-direct init if the direct algorithm has been selected. | def __get_nondirect_init(self, init):
crc = init
for i in range(self.Width):
bit = crc & 0x01
if bit:
crc ^= self.Poly
crc >>= 1
if bit:
crc |= self.MSB_Mask
return crc & self.Mask | 48 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L98-L110 | def from_bytes(self, string):
msg = srsly.msgpack_loads(gzip.decompress(string))
self.attrs = msg["attrs"]
self.strings = set(msg["strings"])
lengths = numpy.fromstring(msg["lengths"], dtype="int32")
flat_spaces = numpy.fromstring(msg["spaces"], dtype=bool)
flat_tokens = ... |
reflect a data word, i.e. reverts the bit order. | def reflect(self, data, width):
x = data & 0x01
for i in range(width - 1):
data >>= 1
x = (x << 1) | (data & 0x01)
return x | 49 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L115-L123 | def linkify_templates(self):
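`reflect` reverses the order of the lowest `width` bits of a word; a standalone version of the method above (dropping `self`, which the body never uses):

```python
def reflect(data, width):
    # Reverse the order of the lowest `width` bits of `data`.
    x = data & 0x01
    for _ in range(width - 1):
        data >>= 1
        x = (x << 1) | (data & 0x01)
    return x

print(bin(reflect(0b1101, 4)))  # 0b1011
```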
self.hosts.linkify_templates()
self.contacts.linkify_templates()
self.services.linkify_templates()
self.servicedependencies.linkify_templates()
self.hostdependencies.linkify_templates()
self.timeperiods.linkify_templates()
self.hostsex... |
Classic simple and slow CRC implementation. This function iterates bit
by bit over the augmented input message and returns the calculated CRC
value at the end. | def bit_by_bit(self, in_data):
# If the input data is a string, convert to bytes.
if isinstance(in_data, str):
in_data = [ord(c) for c in in_data]
register = self.NonDirectInit
for octet in in_data:
if self.ReflectIn:
octet = self.reflect(octet, 8... | 50 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L128-L156 | def create_cvmfs_persistent_volume_claim(cvmfs_volume):
from kubernetes.client.rest import ApiException
from reana_commons.k8s.api_client import current_k8s_corev1_api_client
try:
current_k8s_corev1_api_client.\
create_namespaced_persistent_volume_claim(
"default",
... |
This function generates the CRC table used for the table_driven CRC
algorithm. The Python version cannot handle tables of an index width
other than 8. See the generated C code for tables with different sizes
instead. | def gen_table(self):
table_length = 1 << self.TableIdxWidth
tbl = [0] * table_length
for i in range(table_length):
register = i
if self.ReflectIn:
register = self.reflect(register, self.TableIdxWidth)
register = register << (self.Width - self.T... | 51 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L190-L212 | def cancel_broadcast(self, broadcast_guid):
subpath = 'broadcasts/%s/update' % broadcast_guid
broadcast = {'status': 'CANCELED'}
bcast_dict = self._call(subpath, method='POST', data=broadcast,
content_type='application/json')
return bcast_dict |
The Standard table_driven CRC algorithm. | def table_driven(self, in_data):
# If the input data is a string, convert to bytes.
if isinstance(in_data, str):
in_data = [ord(c) for c in in_data]
tbl = self.gen_table()
register = self.DirectInit << self.CrcShift
if not self.ReflectIn:
for octet in in... | 52 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/_crc_algorithms.py#L217-L242 | def _prune_subdirs(dir_: str) -> None:
for logdir in [path.join(dir_, f) for f in listdir(dir_) if is_train_dir(path.join(dir_, f))]:
for subdir in [path.join(logdir, f) for f in listdir(logdir) if path.isdir(path.join(logdir, f))]:
_safe_rmtree(subdir) |
parse masked sequence into non-masked and masked regions | def parse_masked(seq, min_len):
nm, masked = [], [[]]
prev = None
for base in seq[1]:
if base.isupper():
nm.append(base)
if masked != [[]] and len(masked[-1]) < min_len:
nm.extend(masked[-1])
del masked[-1]
prev = False
elif... | 53 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/strip_masked.py#L13-L31 | def handle_event(self, event):
subscription_id = event.subscription_id
if subscription_id in self._subscriptions:
# FIXME: [1] should be a constant
handler = self._subscriptions[subscription_id][SUBSCRIPTION_CALLBACK]
WampSubscriptionWrapper(self,handler,event).start(... |
remove masked regions from fasta file as long as
they are longer than min_len | def strip_masked(fasta, min_len, print_masked):
for seq in parse_fasta(fasta):
nm, masked = parse_masked(seq, min_len)
nm = ['%s removed_masked >=%s' % (seq[0], min_len), ''.join(nm)]
yield [0, nm]
if print_masked is True:
for i, m in enumerate([i for i in masked if i != ... | 54 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/strip_masked.py#L33-L45 | def websocket_connect(self, message):
self.session_id = self.scope['url_route']['kwargs']['subscriber_id']
super().websocket_connect(message)
# Create new subscriber object.
Subscriber.objects.get_or_create(session_id=self.session_id) |
Return arcsine transformed relative abundance from a BIOM format file.
:type biomfile: BIOM format file
:param biomfile: BIOM format file used to obtain relative abundances for each OTU in
a SampleID, which are used as node sizes in network plots.
:type return: Dictionary of dictionar... | def get_relative_abundance(biomfile):
biomf = biom.load_table(biomfile)
norm_biomf = biomf.norm(inplace=False)
rel_abd = {}
for sid in norm_biomf.ids():
rel_abd[sid] = {}
for otuid in norm_biomf.ids("observation"):
otuname = oc.otu_name(norm_biomf.metadata(otuid, axis="observ... | 55 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/network_plots_gephi.py#L33-L57 | def update_context(self,
context,
update_mask=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None):
# Wrap the transport method to ad... |
Find an OTU ID in a Newick-format tree.
Return the starting position of the ID or None if not found. | def find_otu(otuid, tree):
for m in re.finditer(otuid, tree):
before, after = tree[m.start()-1], tree[m.start()+len(otuid)]
if before in ["(", ",", ")"] and after in [":", ";"]:
return m.start()
return None | 56 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/iTol.py#L17-L26 | def set_keyspace(self, keyspace):
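A runnable version of `find_otu`, with a guard added for a match at position 0 (an assumption; the original indexes `tree[m.start()-1]` unconditionally) and the OTU ID escaped before being used as a pattern:

```python
import re

def find_otu(otuid, tree):
    # Return the start of otuid when it appears as a whole label in the
    # Newick string, i.e. delimited by tree punctuation on both sides.
    for m in re.finditer(re.escape(otuid), tree):
        before = tree[m.start() - 1] if m.start() > 0 else ''
        after = tree[m.start() + len(otuid)]
        if before in ('(', ',', ')') and after in (':', ';'):
            return m.start()
    return None

tree = '(OTU1:0.1,(OTU2:0.2,OTU3:0.3):0.05);'
print(find_otu('OTU2', tree))  # 11
```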
self.keyspace = keyspace
dfrds = []
for p in self._protos:
dfrds.append(p.submitRequest(ManagedThriftRequest(
'set_keyspace', keyspace)))
return defer.gatherResults(dfrds) |
Replace the OTU ids in the Newick phylogenetic tree format with truncated
OTU names | def newick_replace_otuids(tree, biomf):
for val, id_, md in biomf.iter(axis="observation"):
otu_loc = find_otu(id_, tree)
if otu_loc is not None:
tree = tree[:otu_loc] + \
oc.otu_name(md["taxonomy"]) + \
tree[otu_loc + len(id_):]
return tree | 57 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/iTol.py#L29-L40 | def led_changed(self, addr, group, val):
_LOGGER.debug("Button %d LED changed from %d to %d",
self._group, self._value, val)
led_on = bool(val)
if led_on != bool(self._value):
self._update_subscribers(int(led_on)) |
return genome info for choosing representative
if ggKbase table provided - choose rep based on SCGs and genome length
- priority for most SCGs - extra SCGs, then largest genome
otherwise, based on largest genome | def genome_info(genome, info):
try:
scg = info['#SCGs']
dups = info['#SCG duplicates']
length = info['genome size (bp)']
return [scg - dups, length, genome]
except KeyError:
return [False, False, info['genome size (bp)'], genome] | 58 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L97-L112 | def reset_env(exclude=[]):
if os.getenv(env.INITED):
wandb_keys = [key for key in os.environ.keys() if key.startswith(
'WANDB_') and key not in exclude]
for key in wandb_keys:
del os.environ[key]
return True
else:
return False |
choose represenative genome and
print cluster information
*if ggKbase table is provided, use SCG info to choose best genome | def print_clusters(fastas, info, ANI):
header = ['#cluster', 'num. genomes', 'rep.', 'genome', '#SCGs', '#SCG duplicates', \
'genome size (bp)', 'fragments', 'list']
yield header
in_cluster = []
for cluster_num, cluster in enumerate(connected_components(ANI)):
cluster = sorted([genom... | 59 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L114-L163 | def _apply_index_days(self, i, roll):
nanos = (roll % 2) * Timedelta(days=self.day_of_month - 1).value
return i + nanos.astype('timedelta64[ns]') |
convert ggKbase genome info tables to dictionary | def parse_ggKbase_tables(tables, id_type):
g2info = {}
for table in tables:
for line in open(table):
line = line.strip().split('\t')
if line[0].startswith('name'):
header = line
header[4] = 'genome size (bp)'
header[12] = '#SCGs'
... | 60 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L174-L213 | def _cleanSessions(self):
tooOld = extime.Time() - timedelta(seconds=PERSISTENT_SESSION_LIFETIME)
self.store.query(
PersistentSession,
PersistentSession.lastUsed < tooOld).deleteFromStore()
self._lastClean = self._clock.seconds() |
convert checkM genome info tables to dictionary | def parse_checkM_tables(tables):
g2info = {}
for table in tables:
for line in open(table):
line = line.strip().split('\t')
if line[0].startswith('Bin Id'):
header = line
header[8] = 'genome size (bp)'
header[5] = '#SCGs'
... | 61 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L215-L235 | def slanted_triangular(max_rate, num_steps, cut_frac=0.1, ratio=32, decay=1, t=0.0):
cut = int(num_steps * cut_frac)
while True:
t += 1
if t < cut:
p = t / cut
else:
p = 1 - ((t - cut) / (cut * (1 / cut_frac - 1)))
learn_rate = max_rate * (1 + p * (ratio -... |
get genome lengths | def genome_lengths(fastas, info):
if info is False:
info = {}
for genome in fastas:
name = genome.rsplit('.', 1)[0].rsplit('/', 1)[-1].rsplit('.contigs')[0]
if name in info:
continue
length = 0
fragments = 0
for seq in parse_fasta(genome):
... | 62 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/cluster_ani.py#L237-L253 | def save_reg(data):
reg_dir = _reg_dir()
regfile = os.path.join(reg_dir, 'register')
try:
if not os.path.exists(reg_dir):
os.makedirs(reg_dir)
except OSError as exc:
if exc.errno == errno.EEXIST:
pass
else:
raise
try:
with salt.util... |
Returns a list of db keys to route the given call to.
:param attr: Name of attribute being called on the connection.
:param args: List of arguments being passed to ``attr``.
:param kwargs: Dictionary of keyword arguments being passed to ``attr``.
>>> redis = Cluster(router=BaseRouter)
... | def get_dbs(self, attr, args, kwargs, **fkwargs):
if not self._ready:
if not self.setup_router(args=args, kwargs=kwargs, **fkwargs):
raise self.UnableToSetupRouter()
retval = self._pre_routing(attr=attr, args=args, kwargs=kwargs, **fkwargs)
if retval is not None:
... | 63 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L50-L81 | def fullLoad(self):
self._parseDirectories(self.ntHeaders.optionalHeader.dataDirectory, self.PE_TYPE) |
Call method to perform any setup | def setup_router(self, args, kwargs, **fkwargs):
self._ready = self._setup_router(args=args, kwargs=kwargs, **fkwargs)
return self._ready | 64 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L87-L93 | def world_series_logs():
file_name = 'GLWS.TXT'
z = get_zip_file(world_series_url)
data = pd.read_csv(z.open(file_name), header=None, sep=',', quotechar='"')
data.columns = gamelog_columns
return data |
Perform routing and return db_nums | def _route(self, attr, args, kwargs, **fkwargs):
return self.cluster.hosts.keys() | 65 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L111-L115 | def static_stability(pressure, temperature, axis=0):
theta = potential_temperature(pressure, temperature)
return - mpconsts.Rd * temperature / pressure * first_derivative(np.log(theta / units.K),
x=pressure, axis=axis) |
Iterates through all connections which were previously listed as unavailable
and marks any that have expired their retry_timeout as being up. | def check_down_connections(self):
now = time.time()
# Iterate over a copy: mark_connection_up() removes entries mid-loop.
for db_num, marked_down_at in list(self._down_connections.items()):
if marked_down_at + self.retry_timeout <= now:
self.mark_connection_up(db_num) | 66 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L175-L184 | def _CalculateDigestHash(self, file_entry, data_stream_name):
file_object = file_entry.GetFileObject(data_stream_name=data_stream_name)
if not file_object:
return None
try:
file_object.seek(0, os.SEEK_SET)
hasher_object = hashers_manager.HashersManager.GetHasher('sha256')
data = f... |
Marks all connections which were previously listed as unavailable as being up. | def flush_down_connections(self):
self._get_db_attempts = 0
# Iterate over a copy: mark_connection_up() removes entries mid-loop.
for db_num in list(self._down_connections):
self.mark_connection_up(db_num) | 67 | https://github.com/disqus/nydus/blob/9b505840da47a34f758a830c3992fa5dcb7bb7ad/nydus/db/routers/base.py#L186-L192 | def _CalculateDigestHash(self, file_entry, data_stream_name):
file_object = file_entry.GetFileObject(data_stream_name=data_stream_name)
if not file_object:
return None
try:
file_object.seek(0, os.SEEK_SET)
hasher_object = hashers_manager.HashersManager.GetHasher('sha256')
data = f... |
Compute standby power
Parameters
----------
df : pandas.DataFrame or pandas.Series
Electricity Power
resolution : str, default='d'
Resolution of the computation. Data will be resampled to this resolution (as mean) before computation
of the minimum.
String that can be pa... | def standby(df, resolution='24h', time_window=None):
if df.empty:
raise EmptyDataFrame()
df = pd.DataFrame(df) # if df was a pd.Series, convert to DataFrame
def parse_time(t):
if isinstance(t, numbers.Number):
return pd.Timestamp.utcfromtimestamp(t).time()
else:
... | 68 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L72-L115 | def retract(self):
if lib.EnvRetract(self._env, self._fact) != 1:
raise CLIPSError(self._env) |
Compute the share of the standby power in the total consumption.
Parameters
----------
df : pandas.DataFrame or pandas.Series
Power (typically electricity, can be anything)
resolution : str, default='d'
Resolution of the computation. Data will be resampled to this resolution (as mean) ... | def share_of_standby(df, resolution='24h', time_window=None):
p_sb = standby(df, resolution, time_window)
df = df.resample(resolution).mean()
p_tot = df.sum()
p_standby = p_sb.sum()
share_standby = p_standby / p_tot
res = share_standby.iloc[0]
return res | 69 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L118-L146 | def bulk_delete(handler, request):
ids = request.GET.getall('ids')
Message.delete().where(Message.id << ids).execute()
raise muffin.HTTPFound(handler.url) |
Toggle counter for gas boilers
Counts the number of times the gas consumption increases with more than 3kW
Parameters
----------
ts: Pandas Series
Gas consumption in minute resolution
Returns
-------
int | def count_peaks(ts):
on_toggles = ts.diff() > 3000
shifted = np.logical_not(on_toggles.shift(1))
result = on_toggles & shifted
count = result.sum()
return count | 70 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L149-L169 | def FindProxies():
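A self-contained sketch of the toggle counter; `shift(1, fill_value=False)` stands in for the `shift`/`logical_not` pair above so the first sample cannot count a NaN edge (a small deviation from the row's code):

```python
import pandas as pd

def count_peaks(ts):
    # A rising edge is a jump of more than 3000 W at a sample that was
    # not already "on" at the previous sample.
    on_toggles = ts.diff() > 3000
    shifted = ~on_toggles.shift(1, fill_value=False)
    return int((on_toggles & shifted).sum())

ts = pd.Series([0, 5000, 5200, 0, 4000, 4100, 0])
print(count_peaks(ts))  # 2
```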
sc = objc.SystemConfiguration()
# Get the dictionary of network proxy settings
settings = sc.dll.SCDynamicStoreCopyProxies(None)
if not settings:
return []
try:
cf_http_enabled = sc.CFDictRetrieve(settings, "kSCPropNetProxiesHTTPEnable")
if cf_http_enabled and bool(sc.CFNumTo... |
Calculate the ratio of input vs. norm over a given interval.
Parameters
----------
ts : pandas.Series
timeseries
resolution : str, optional
interval over which to calculate the ratio
default: resolution of the input timeseries
norm : int | float, optional
denominator... | def load_factor(ts, resolution=None, norm=None):
if norm is None:
norm = ts.max()
if resolution is not None:
ts = ts.resample(rule=resolution).mean()
lf = ts / norm
return lf | 71 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/analysis.py#L172-L199 | def inject_basic_program(self, ascii_listing):
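With `resolution=None` and `norm=None` the function simply scales the series by its own maximum; a runnable sketch with invented sample data:

```python
import pandas as pd

def load_factor(ts, resolution=None, norm=None):
    # Ratio of the series to `norm` (defaults to the series maximum),
    # optionally after resampling to `resolution`.
    if norm is None:
        norm = ts.max()
    if resolution is not None:
        ts = ts.resample(rule=resolution).mean()
    return ts / norm

idx = pd.date_range('2024-01-01', periods=4, freq='h')
ts = pd.Series([2.0, 4.0, 6.0, 8.0], index=idx)
print(load_factor(ts).tolist())  # [0.25, 0.5, 0.75, 1.0]
```

Note that the norm is taken from the raw series before any resampling, mirroring the order of operations in the row above.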
program_start = self.cpu.memory.read_word(
self.machine_api.PROGRAM_START_ADDR
)
tokens = self.machine_api.ascii_listing2program_dump(ascii_listing)
self.cpu.memory.load(program_start, tokens)
log.critical("BASIC program ... |
get top hits after sorting by column number | def top_hits(hits, num, column, reverse):
hits.sort(key=itemgetter(column), reverse=reverse)
for hit in hits[0:num]:
yield hit | 72 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L17-L23 | def weld_udf(weld_template, mapping):
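`top_hits` sorts its input in place via `list.sort` and then yields; for example:

```python
from operator import itemgetter

def top_hits(hits, num, column, reverse):
    # Sort in place by the chosen column, then yield the first `num` hits.
    hits.sort(key=itemgetter(column), reverse=reverse)
    for hit in hits[0:num]:
        yield hit

hits = [['a', 10], ['b', 30], ['c', 20]]
print(list(top_hits(hits, 2, 1, True)))  # [['b', 30], ['c', 20]]
```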
weld_obj = create_empty_weld_object()
for k, v in mapping.items():
if isinstance(v, (np.ndarray, WeldObject)):
obj_id = get_weld_obj_id(weld_obj, v)
mapping.update({k: obj_id})
weld_obj.weld_code = weld_template.format(**mapping)
r... |
parse b6 output with sorting | def numBlast_sort(blast, numHits, evalueT, bitT):
header = ['#query', 'target', 'pident', 'alen', 'mismatch', 'gapopen',
'qstart', 'qend', 'tstart', 'tend', 'evalue', 'bitscore']
yield header
hmm = {h:[] for h in header}
for line in blast:
if line.startswith('#'):
conti... | 73 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L25-L50 | def delete(self):
try:
return self._server.query('/library/sections/%s' % self.key, method=self._server._session.delete)
except BadRequest: # pragma: no cover
msg = 'Failed to delete library %s' % self.key
msg += 'You may need to allow this permission in your Plex se... |
parse b6 output | def numBlast(blast, numHits, evalueT = False, bitT = False, sort = False):
if sort is True:
for hit in numBlast_sort(blast, numHits, evalueT, bitT):
yield hit
return
header = ['#query', 'target', 'pident', 'alen', 'mismatch', 'gapopen',
'qstart', 'qend', 'tstart', 'tend... | 74 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L52-L85 | def CloseHandle(self):
if hasattr(self, 'handle'):
ret = vmGuestLib.VMGuestLib_CloseHandle(self.handle.value)
if ret != VMGUESTLIB_ERROR_SUCCESS: raise VMGuestLibException(ret)
del(self.handle) |
parse hmm domain table output
this version is faster but does not work unless the table is sorted | def numDomtblout(domtblout, numHits, evalueT, bitT, sort):
if sort is True:
for hit in numDomtblout_sort(domtblout, numHits, evalueT, bitT):
yield hit
return
header = ['#target name', 'target accession', 'tlen',
'query name', 'query accession', 'qlen',
'fu... | 75 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/besthits.py#L121-L168 | def account_unblock(self, id):
id = self.__unpack_id(id)
url = '/api/v1/accounts/{0}/unblock'.format(str(id))
return self.__api_request('POST', url) |
convert stockholm to fasta | def stock2fa(stock):
seqs = {}
for line in stock:
if not line.startswith('#') and not line.startswith(' ') and len(line) > 3:
id, seq = line.strip().split()
id = id.rsplit('/', 1)[0]
id = re.split(r'[0-9]\|', id, 1)[-1]
if id not in seqs:
... | 76 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/stockholm2fa.py#L11-L26 | def put_lifecycle_configuration(Bucket,
Rules,
region=None, key=None, keyid=None, profile=None):
try:
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
if Rules is not None and isinstance(Rules, six.string_types):
Rules = salt.utils.json.loads(... |
Return boolean time series following given week schedule.
Parameters
----------
index : pandas.DatetimeIndex
Datetime index
on_time : str or datetime.time
Daily opening time. Default: '09:00'
off_time : str or datetime.time
Daily closing time. Default: '17:00'
off_days :... | def week_schedule(index, on_time=None, off_time=None, off_days=None):
if on_time is None:
on_time = '9:00'
if off_time is None:
off_time = '17:00'
if off_days is None:
off_days = ['Sunday', 'Monday']
if not isinstance(on_time, datetime.time):
on_time = pd.to_datetime(on_t... | 77 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/utils.py#L10-L47 | def deregister_entity_from_group(self, entity, group):
if entity in self._entities:
if entity in self._groups[group]:
self._groups[group].remove(entity)
else:
raise UnmanagedEntityError(entity) |
Draw a carpet plot of a pandas timeseries.
The carpet plot reads like a letter. Every day one line is added to the
bottom of the figure, minute for minute moving from left (morning) to right
(evening).
The color denotes the level of consumption and is scaled logarithmically.
If vmin and vmax are no... | def carpet(timeseries, **kwargs):
# define optional input parameters
cmap = kwargs.pop('cmap', cm.coolwarm)
norm = kwargs.pop('norm', LogNorm())
interpolation = kwargs.pop('interpolation', 'nearest')
cblabel = kwargs.pop('zlabel', timeseries.name if timeseries.name else '')
title = kwargs.pop('... | 78 | https://github.com/opengridcc/opengrid/blob/69b8da3c8fcea9300226c45ef0628cd6d4307651/opengrid/library/plotting.py#L34-L125 | def num_gpus():
count = ctypes.c_int()
check_call(_LIB.MXGetGPUCount(ctypes.byref(count)))
return count.value |
calculate percent identity | def calc_pident_ignore_gaps(a, b):
m = 0 # matches
mm = 0 # mismatches
for A, B in zip(list(a), list(b)):
if A == '-' or A == '.' or B == '-' or B == '.':
continue
if A == B:
m += 1
else:
mm += 1
try:
return float(float(m)/float((m + mm... | 79 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L34-L50 | def get_local_file(file):
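The truncated return computes `100 * matches / (matches + mismatches)`; a complete sketch (the `ZeroDivisionError` fallback of `0.0` for all-gap input is an assumption):

```python
def calc_pident_ignore_gaps(a, b):
    # Percent identity over aligned columns, skipping any column in which
    # either sequence carries a gap character ('-' or '.').
    m = mm = 0
    for A, B in zip(a, b):
        if A in '-.' or B in '-.':
            continue
        if A == B:
            m += 1
        else:
            mm += 1
    try:
        return 100 * m / (m + mm)
    except ZeroDivisionError:
        return 0.0

print(calc_pident_ignore_gaps('AC-GT', 'ACTGA'))  # 75.0
```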
try:
with open(file.path):
yield file.path
except NotImplementedError:
_, ext = os.path.splitext(file.name)
with NamedTemporaryFile(prefix='wagtailvideo-', suffix=ext) as tmp:
try:
file.open('rb')
for chunk... |
skip column if either is a gap | def remove_gaps(A, B):
a_seq, b_seq = [], []
for a, b in zip(list(A), list(B)):
if a == '-' or a == '.' or b == '-' or b == '.':
continue
a_seq.append(a)
b_seq.append(b)
return ''.join(a_seq), ''.join(b_seq) | 80 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L52-L62 | def read_legacy(filename):
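For example, dropping every aligned column where either sequence has a gap:

```python
def remove_gaps(A, B):
    # Keep only the columns in which neither sequence has a gap
    # ('-' or '.').
    a_seq, b_seq = [], []
    for a, b in zip(A, B):
        if a in '-.' or b in '-.':
            continue
        a_seq.append(a)
        b_seq.append(b)
    return ''.join(a_seq), ''.join(b_seq)

print(remove_gaps('AC-GT', 'A-CGT'))  # ('AGT', 'AGT')
```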
reader = vtk.vtkDataSetReader()
reader.SetFileName(filename)
# Ensure all data is fetched with poorly formated legacy files
reader.ReadAllScalarsOn()
reader.ReadAllColorScalarsOn()
reader.ReadAllNormalsOn()
reader.ReadAllTCoordsOn()
reader.ReadAllVectorsOn()
... |
compare pairs of sequences | def compare_seqs(seqs):
A, B, ignore_gaps = seqs
a, b = A[1], B[1] # actual sequences
if len(a) != len(b):
print('# reads are not the same length', file=sys.stderr)
exit()
if ignore_gaps is True:
pident = calc_pident_ignore_gaps(a, b)
else:
pident = calc_pident(a, b)
... | 81 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L64-L77 | def get_s3_origin_conf_class():
if LooseVersion(troposphere.__version__) > LooseVersion('2.4.0'):
return cloudfront.S3OriginConfig
if LooseVersion(troposphere.__version__) == LooseVersion('2.4.0'):
return S3OriginConfig
return cloudfront.S3Origin |
calculate Levenshtein ratio of sequences | def compare_seqs_leven(seqs):
A, B, ignore_gaps = seqs
a, b = remove_gaps(A[1], B[1]) # actual sequences
if len(a) != len(b):
print('# reads are not the same length', file=sys.stderr)
exit()
pident = lr(a, b) * 100
return A[0], B[0], pident | 82 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L79-L89 | def del_unused_keyframes(self):
skl = self.key_frame_list.sorted_key_list()
unused_keys = [k for k in self.dct['keys']
if k not in skl]
for k in unused_keys:
del self.dct['keys'][k] |
make pairwise sequence comparisons between aligned sequences | def pairwise_compare(afa, leven, threads, print_list, ignore_gaps):
# load sequences into dictionary
seqs = {seq[0]: seq for seq in nr_fasta([afa], append_index = True)}
num_seqs = len(seqs)
# define all pairs
pairs = ((i[0], i[1], ignore_gaps) for i in itertools.combinations(list(seqs.values()), 2)... | 83 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L91-L110 | def makeCubiccFunc(self,mNrm,cNrm):
EndOfPrdvPP = self.DiscFacEff*self.Rfree*self.Rfree*self.PermGroFac**(-self.CRRA-1.0)* \
np.sum(self.PermShkVals_temp**(-self.CRRA-1.0)*
self.vPPfuncNext(self.mNrmNext)*self.ShkPrbs_temp,axis=0)
dcda = EndOfPrd... |
print matrix of pidents to stdout | def print_pairwise(pw, median = False):
names = sorted(set([i for i in pw]))
if len(names) != 0:
if '>' in names[0]:
yield ['#'] + [i.split('>')[1] for i in names if '>' in i]
else:
yield ['#'] + names
for a in names:
if '>' in a:
yield... | 84 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L132-L155 | def delete_index(self, cardinality):
DatabaseConnector.delete_index(self, cardinality)
query = "DROP INDEX IF EXISTS idx_{0}_gram_varchar;".format(cardinality)
self.execute_sql(query)
query = "DROP INDEX IF EXISTS idx_{0}_gram_normalized_varchar;".format(
cardinality)
... |
print stats for comparisons | def print_comps(comps):
if comps == []:
print('n/a')
else:
print('# min: %s, max: %s, mean: %s' % \
(min(comps), max(comps), np.mean(comps))) | 85 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L157-L165 | def _load_vertex_buffers(self):
fd = gzip.open(cache_name(self.file_name), 'rb')
for buff in self.meta.vertex_buffers:
mat = self.wavefront.materials.get(buff['material'])
if not mat:
mat = Material(name=buff['material'], is_default=True)
self.wa... |
print min. pident within each clade and then matrix of between-clade max. | def compare_clades(pw):
names = sorted(set([i for i in pw]))
for i in range(0, 4):
wi, bt = {}, {}
for a in names:
for b in pw[a]:
if ';' not in a or ';' not in b:
continue
pident = pw[a][b]
cA, cB = a.split(';')[i],... | 86 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L167-L216 | def setGroups(self, groups, kerningGroupConversionRenameMaps=None):
skipping = []
for name, members in groups.items():
checked = []
for m in members:
if m in self.font:
checked.append(m)
else:
skipping.append... |
convert matrix to dictionary of comparisons | def matrix2dictionary(matrix):
pw = {}
for line in matrix:
line = line.strip().split('\t')
if line[0].startswith('#'):
names = line[1:]
continue
a = line[0]
for i, pident in enumerate(line[1:]):
b = names[i]
if a not in pw:
... | 87 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/compare_aligned.py#L218-L239 | def _sampleLocationOnSide(self):
z = random.uniform(-1, 1) * self.height / 2.
sampledAngle = 2 * random.random() * pi
x, y = self.radius * cos(sampledAngle), self.radius * sin(sampledAngle)
return [x, y, z] |
Set argument parser option. | def setoption(parser, metadata=None):
parser.add_argument('-v', action='version',
version=__version__)
subparsers = parser.add_subparsers(help='sub commands help')
create_cmd = subparsers.add_parser('create')
create_cmd.add_argument('name',
help='Speci... | 88 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/commands.py#L12-L51 | def swd_sync(self, pad=False):
if pad:
self._dll.JLINK_SWD_SyncBytes()
else:
self._dll.JLINK_SWD_SyncBits()
return None |
Parse argument options. | def parse_options(metadata):
parser = argparse.ArgumentParser(description='%(prog)s usage:',
prog=__prog__)
setoption(parser, metadata=metadata)
return parser | 89 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/commands.py#L72-L77 | def vrel(v1, v2):
v1 = stypes.toDoubleVector(v1)
v2 = stypes.toDoubleVector(v2)
return libspice.vrel_c(v1, v2) |
Execute main processes. | def main():
try:
pkg_version = Update()
if pkg_version.updatable():
pkg_version.show_message()
metadata = control.retreive_metadata()
parser = parse_options(metadata)
argvs = sys.argv
if len(argvs) <= 1:
parser.print_help()
sys.exit... | 90 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/commands.py#L80-L99 | def price(self, minimum: float = 10.00,
maximum: float = 1000.00) -> str:
price = self.random.uniform(minimum, maximum, precision=2)
return '{0} {1}'.format(price, self.currency_symbol()) |
Check key and set default vaule when it does not exists. | def _check_or_set_default_params(self):
if not hasattr(self, 'date'):
self._set_param('date', datetime.utcnow().strftime('%Y-%m-%d'))
if not hasattr(self, 'version'):
self._set_param('version', self.default_version)
# pylint: disable=no-member
if not hasattr(self,... | 91 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/package.py#L44-L52 | def _get_files_modified():
cmd = "git diff-index --cached --name-only --diff-filter=ACMRTUXB HEAD"
_, files_modified, _ = run(cmd)
extensions = [re.escape(ext) for ext in list(SUPPORTED_FILES) + [".rst"]]
test = "(?:{0})$".format("|".join(extensions))
return list(filter(lambda f: re.search(test, f)... |
Move directory from working directory to output directory. | def move(self):
if not os.path.isdir(self.outdir):
os.makedirs(self.outdir)
shutil.move(self.tmpdir, os.path.join(self.outdir, self.name)) | 92 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/package.py#L169-L173 | def init(*args, **kwargs):
global _initial_client
client = Client(*args, **kwargs)
Hub.current.bind_client(client)
rv = _InitGuard(client)
if client is not None:
_initial_client = weakref.ref(client)
return rv |
Initialize VCS repository. | def vcs_init(self):
VCS(os.path.join(self.outdir, self.name), self.pkg_data) | 93 | https://github.com/mkouhei/bootstrap-py/blob/95d56ed98ef409fd9f019dc352fd1c3711533275/bootstrap_py/package.py#L185-L187 | def group_experiments_greedy(tomo_expt: TomographyExperiment):
diag_sets = _max_tpb_overlap(tomo_expt)
grouped_expt_settings_list = list(diag_sets.values())
grouped_tomo_expt = TomographyExperiment(grouped_expt_settings_list, program=tomo_expt.program)
return grouped_tomo_expt |
Finds the location of the current Steam installation on Windows machines.
Returns None for any non-Windows machines, or for Windows machines where
Steam is not installed. | def find_steam_location():
if registry is None:
return None
key = registry.CreateKey(registry.HKEY_CURRENT_USER,"Software\Valve\Steam")
return registry.QueryValueEx(key,"SteamPath")[0] | 94 | https://github.com/scottrice/pysteam/blob/1eb2254b5235a053a953e596fa7602d0b110245d/pysteam/winutils.py#L10-L20 | def _merge(*args):
return re.compile(r'^' + r'[/-]'.join(args) + r'(?:\s+' + _dow + ')?$') |
Plot PCoA principal coordinates scaled by the relative abundances of
otu_name. | def plot_PCoA(cat_data, otu_name, unifrac, names, colors, xr, yr, outDir,
save_as, plot_style):
fig = plt.figure(figsize=(14, 8))
ax = fig.add_subplot(111)
for i, cat in enumerate(cat_data):
plt.scatter(cat_data[cat]["pc1"], cat_data[cat]["pc2"], cat_data[cat]["size"],
... | 95 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/PCoA_bubble.py#L36-L65 | def reset(cls):
cls.debug = False
cls.disabled = False
cls.overwrite = False
cls.playback_only = False
cls.recv_timeout = 5
cls.recv_endmarkers = []
cls.recv_size = None |
Split up the column data in a biom table by mapping category value. | def split_by_category(biom_cols, mapping, category_id):
columns = defaultdict(list)
for i, col in enumerate(biom_cols):
columns[mapping[col['id']][category_id]].append((i, col))
return columns | 96 | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/transpose_biom.py#L17-L25 | def update_prompt(self):
prefix = ""
if self._local_endpoint is not None:
prefix += "(%s:%d) " % self._local_endpoint
prefix += self.engine.region
if self.engine.partial:
self.prompt = len(prefix) * " " + "> "
else:
self.prompt = prefix + "> " |
print line if starts with ... | def print_line(l):
print_lines = ['# STOCKHOLM', '#=GF', '#=GS', ' ']
if len(l.split()) == 0:
return True
for start in print_lines:
if l.startswith(start):
return True
return False | 97 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/stockholm2oneline.py#L11-L21 | def setOverlayTransformTrackedDeviceRelative(self, ulOverlayHandle, unTrackedDevice):
fn = self.function_table.setOverlayTransformTrackedDeviceRelative
pmatTrackedDeviceToOverlayTransform = HmdMatrix34_t()
result = fn(ulOverlayHandle, unTrackedDevice, byref(pmatTrackedDeviceToOverlayTransform))... |
convert stockholm to single line format | def stock2one(stock):
lines = {}
for line in stock:
line = line.strip()
if print_line(line) is True:
yield line
continue
if line.startswith('//'):
continue
ID, seq = line.rsplit(' ', 1)
if ID not in lines:
lines[ID] = ''
... | 98 | https://github.com/christophertbrown/bioscripts/blob/83b2566b3a5745437ec651cd6cafddd056846240/ctbBio/stockholm2oneline.py#L23-L44 | def describe_event_source_mapping(UUID=None, EventSourceArn=None,
FunctionName=None,
region=None, key=None, keyid=None, profile=None):
ids = _get_ids(UUID, EventSourceArn=EventSourceArn,
FunctionName=FunctionName)
if not ids... |
Statics the methods. wut. | def math_func(f):
@wraps(f)
def wrapper(*args, **kwargs):
if len(args) > 0:
return_type = type(args[0])
if kwargs.has_key('return_type'):
return_type = kwargs['return_type']
kwargs.pop('return_type')
return return_type(f(*args, **kwargs))
a... | 99 | https://github.com/elbow-jason/Uno-deprecated/blob/4ad07d7b84e5b6e3e2b2c89db69448906f24b4e4/uno/helpers.py#L8-L22 | def PublishMultipleEvents(cls, events, token=None):
event_name_map = registry.EventRegistry.EVENT_NAME_MAP
for event_name, messages in iteritems(events):
if not isinstance(event_name, string_types):
raise ValueError(
"Event names should be string, got: %s" % type(event_name))
for... |
End of preview.
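The preview above fails because the repo's JSONL files do not all share one column set, so the viewer cannot cast them to a single schema. Below is a minimal, hypothetical sketch of detecting such a mismatch before loading. The two in-memory "files" and their names are stand-ins; the column names mirror those listed in the viewer's error message, but the actual repo layout may differ.

```python
import io
import json

# Hypothetical in-memory stand-ins for the repo's JSONL files. The column
# names mirror those in the viewer's cast-error message; the file names and
# row contents are illustrative only.
files = {
    "test.jsonl": '{"url": "u", "sha": "s", "function": "def f(): pass", "docstring": "d"}\n',
    "modified_test_new.jsonl": '{"id_": 0, "query": "q", "task_name": "t", "positive": "p", "negative": "n"}\n',
}

def column_set(jsonl_text):
    """Union of keys seen across all rows of a JSONL blob."""
    cols = set()
    for line in io.StringIO(jsonl_text):
        cols |= json.loads(line).keys()
    return cols

# Group files whose rows share the same columns; each group would need its
# own dataset configuration for the viewer to build without a cast error.
groups = {}
for name, text in files.items():
    groups.setdefault(frozenset(column_set(text)), []).append(name)

print(len(groups))  # → 2: the two files cannot share one schema
```

Files that land in different groups here would need to be split into separate configurations (or have their columns reconciled), as the error message suggests.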