Columns: code (string, lengths 51–2.38k), docstring (string, lengths 4–15.2k)
def _get_name_map(saltenv='base'):
    u_name_map = {}
    name_map = get_repo_data(saltenv).get('name_map', {})
    if not six.PY2:
        return name_map
    for k in name_map:
        u_name_map[k] = name_map[k]
    return u_name_map
Return a reverse map of full pkg names to the names recognized by winrepo.
def contribute_to_class(self, cls, name):
    super(StateField, self).contribute_to_class(cls, name)
    parent_property = getattr(cls, self.name, None)
    setattr(cls, self.name, StateFieldProperty(self, parent_property))
Contribute the state to a Model. Attaches a StateFieldProperty to wrap the attribute.
def total_accessibility(in_rsa, path=True):
    if path:
        with open(in_rsa, 'r') as inf:
            rsa = inf.read()
    else:
        rsa = in_rsa[:]
    all_atoms, side_chains, main_chain, non_polar, polar = [
        float(x) for x in rsa.splitlines()[-1].split()[1:]]
    return all_atoms, side_chains, main_...
Parses rsa file for the total surface accessibility data. Parameters ---------- in_rsa : str Path to naccess rsa file. path : bool Indicates if in_rsa is a path or a string. Returns ------- dssp_residues : 5-tuple(float) Total accessibility values for: [0] a...
def submit(ctx_name, parent_id, name, url, func, *args, **kwargs):
    if isinstance(ctx_name, Context):
        ctx = ctx_name
    else:
        ctx = ctxs.get(ctx_name, ctxs[ctx_default])
    return _submit(ctx, parent_id, name, url, func, *args, **kwargs)
Submit through a context Parameters ---------- ctx_name : str The name of the context to submit through parent_id : str The ID of the group that the job is a part of. name : str The name of the job url : str The handler that can take the results (e.g., /beta_dive...
def get_connection_cls(cls):
    if cls.__connection_cls is None:
        cls.__connection_cls, _ = cls.from_settings()
    return cls.__connection_cls
Return connection class. :rtype: :class:`type`
def _find_relation_factory(module): if not module: return None candidates = [o for o in (getattr(module, attr) for attr in dir(module)) if (o is not RelationFactory and o is not RelationBase and isclass(o) and issubclass...
Attempt to find a RelationFactory subclass in the module. Note: RelationFactory and RelationBase are ignored so they may be imported to be used as base classes without fear.
def DbGetPropertyHist(self, argin):
    self._log.debug("In DbGetPropertyHist()")
    object_name = argin[0]
    prop_name = argin[1]
    return self.db.get_property_hist(object_name, prop_name)
Retrieve object property history :param argin: Str[0] = Object name Str[1] = Property name :type: tango.DevVarStringArray :return: Str[0] = Property name Str[1] = date Str[2] = Property value number (array case) Str[3] = Property value 1 Str[n] = Propert...
def trace_set_format(self, fmt):
    cmd = enums.JLinkTraceCommand.SET_FORMAT
    data = ctypes.c_uint32(fmt)
    res = self._dll.JLINKARM_TRACE_Control(cmd, ctypes.byref(data))
    if res == 1:
        raise errors.JLinkException('Failed to set trace format.')
    return None
Sets the format for the trace buffer to use. Args: self (JLink): the ``JLink`` instance. fmt (int): format for the trace buffer; this is one of the attributes of ``JLinkTraceFormat``. Returns: ``None``
def clear_errors():
    data = []
    data.append(0x0B)
    data.append(BROADCAST_ID)
    data.append(RAM_WRITE_REQ)
    data.append(STATUS_ERROR_RAM)
    data.append(BYTE2)
    data.append(0x00)
    data.append(0x00)
    send_data(data)
Clears the errors register of all Herkulex servos Args: none
def OnGetItemText(self, item, col):
    try:
        column = self.columns[col]
        value = column.get(self.sorted[item])
    except IndexError as err:
        return None
    else:
        if value is None:
            return u''
        if column.percentPossible and self.percenta...
Retrieve text for the item and column respectively
def _insert_html_configs(c, *, project_name, short_project_name): c['templates_path'] = [ '_templates', lsst_sphinx_bootstrap_theme.get_html_templates_path()] c['html_theme'] = 'lsst_sphinx_bootstrap_theme' c['html_theme_path'] = [lsst_sphinx_bootstrap_theme.get_html_theme_path()] c['htm...
Insert HTML theme configurations.
def memory(self):
    if self._memory is not None:
        return self._memory
    elif self._config is not None:
        return self._config.defaultMemory
    else:
        raise AttributeError("Default value for 'memory' cannot be determined")
The maximum number of bytes of memory the job will require to run.
async def cli(self): print('Enter commands and press enter') print('Type help for help and exit to quit') while True: command = await _read_input(self.loop, 'pyatv> ') if command.lower() == 'exit': break elif command == 'cli': p...
Enter commands in a simple CLI.
def Mx(mt, x):
    n = len(mt.Cx)
    sum1 = 0
    for j in range(x, n):
        k = mt.Cx[j]
        sum1 += k
    return sum1
Return Mx, the sum of the commutation column Cx from age x to the end of the table.
def LMLgrad(self,params=None): if params is not None: self.setParams(params) KV = self._update_cache() W = KV['W'] LMLgrad = SP.zeros(self.covar.n_params) for i in range(self.covar.n_params): Kd = self.covar.Kgrad_param(i) LMLgrad[i] = 0.5 * (W...
evaluates the gradient of the log marginal likelihood for the given hyperparameters
def save_hdf(self, filename, path=''):
    self.orbpop_long.save_hdf(filename, '{}/long'.format(path))
    self.orbpop_short.save_hdf(filename, '{}/short'.format(path))
Save to .h5 file.
def jtag_send(self, tms, tdi, num_bits):
    if not util.is_natural(num_bits) or num_bits <= 0 or num_bits > 32:
        raise ValueError('Number of bits must be >= 1 and <= 32.')
    self._dll.JLINKARM_StoreBits(tms, tdi, num_bits)
    return None
Sends data via JTAG. Sends data via JTAG on the rising clock edge, TCK. On each rising clock edge, one bit is transferred in from TDI and out to TDO. The clock uses the TMS to step through the standard JTAG state machine. Args: self (JLink): the ``JLink`` instance ...
def _setup_firefox(self, capabilities): if capabilities.get("marionette"): gecko_driver = self.config.get('Driver', 'gecko_driver_path') self.logger.debug("Gecko driver path given in properties: %s", gecko_driver) else: gecko_driver = None firefox_binary = sel...
Setup Firefox webdriver :param capabilities: capabilities object :returns: a new local Firefox driver
def _find_protocol_error(tb, proto_name):
    tb_info = traceback.extract_tb(tb)
    for frame in reversed(tb_info):
        if frame.filename == proto_name:
            return frame
    else:
        raise KeyError
Return the FrameInfo for the lowest frame in the traceback from the protocol.
async def connect(url, *, apikey=None, insecure=False): url = api_url(url) url = urlparse(url) if url.username is not None: raise ConnectError( "Cannot provide user-name explicitly in URL (%r) when connecting; " "use login instead." % url.username) if url.password is not ...
Connect to a remote MAAS instance with `apikey`. Returns a new :class:`Profile` which has NOT been saved. To connect AND save a new profile:: profile = connect(url, apikey=apikey) profile = profile.replace(name="mad-hatter") with profiles.ProfileStore.open() as config: con...
def write_int32(self, value, little_endian=True):
    if little_endian:
        endian = "<"
    else:
        endian = ">"
    return self.pack('%si' % endian, value)
Pack the value as a signed integer and write 4 bytes to the stream. Args: value: little_endian (bool): specify the endianness. (Default) Little endian. Returns: int: the number of bytes written.
def ikev2scan(ip, **kwargs): return sr(IP(dst=ip) / UDP() / IKEv2(init_SPI=RandString(8), exch_type=34) / IKEv2_payload_SA(prop=IKEv2_payload_Proposal()), **kwargs)
Send an IKEv2 SA to an IP and wait for answers.
def resume(self, container_id=None, sudo=None): return self._state_command(container_id, command='resume', sudo=sudo)
resume a stopped OciImage container, if it exists Equivalent command line example: singularity oci resume <container_ID> Parameters ========== container_id: the id to stop. sudo: Add sudo to the command. If the container was created by root, ...
def get_meta_graph_copy(self, tags=None):
    meta_graph = self.get_meta_graph(tags)
    copy = tf_v1.MetaGraphDef()
    copy.CopyFrom(meta_graph)
    return copy
Returns a copy of a MetaGraph with the identical set of tags.
def decompress_file(filepath): toks = filepath.split(".") file_ext = toks[-1].upper() from monty.io import zopen if file_ext in ["BZ2", "GZ", "Z"]: with open(".".join(toks[0:-1]), 'wb') as f_out, \ zopen(filepath, 'rb') as f_in: f_out.writelines(f_in) os.remov...
Decompresses a file with the correct extension. Automatically detects gz, bz2 or z extension. Args: filepath (str): Path to file. compression (str): A compression mode. Valid options are "gz" or "bz2". Defaults to "gz".
def len2dlc(length):
    if length <= 8:
        return length
    for dlc, nof_bytes in enumerate(CAN_FD_DLC):
        if nof_bytes >= length:
            return dlc
    return 15
Calculate the DLC from data length. :param int length: Length in number of bytes (0-64) :returns: DLC (0-15) :rtype: int
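A self-contained sketch of the lookup, assuming `CAN_FD_DLC` is the standard CAN FD DLC-to-length table:

```python
# Standard CAN FD table: payload length (bytes) encoded by each DLC value 0-15.
CAN_FD_DLC = [0, 1, 2, 3, 4, 5, 6, 7, 8, 12, 16, 20, 24, 32, 48, 64]

def len2dlc(length):
    # Lengths 0-8 map directly to their own DLC value.
    if length <= 8:
        return length
    # Otherwise pick the smallest DLC whose payload size fits the length.
    for dlc, nof_bytes in enumerate(CAN_FD_DLC):
        if nof_bytes >= length:
            return dlc
    return 15
```

For example, a 13-byte payload does not fit in 12 bytes, so it rounds up to the 16-byte payload encoded by DLC 10.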
def _traverse_report(data): if 'items' not in data: return {} out = {} for item in data['items']: skip = (item['severity'] == 'NonDisplay' or item['itemKey'] == 'categoryDesc' or item['value'] in [None, 'Null', 'N/A', 'NULL']) if skip: cont...
Recursively traverse vehicle health report.
def temperature(self):
    result = self.i2c_read(2)
    value = struct.unpack('>H', result)[0]
    if value < 32768:
        return value / 256.0
    else:
        return (value - 65536) / 256.0
Get the temperature in degrees Celsius.
async def open_wallet_search(wallet_handle: int, type_: str, query_json: str, options_json: str) -> int: logger = logging.getLogger(__name__) logger.debug("open_wallet_search: >>> wallet_handle: %r, type_: %r, query_json: %r,...
Search for wallet records :param wallet_handle: wallet handler (created by open_wallet). :param type_: allows to separate different record types collections :param query_json: MongoDB style query to wallet record tags: { "tagName": "tagValue", $or: { "tagName2": { $regex: 'p...
def parse(path): doc = ET.parse(path).getroot() channel = doc.find("./channel") blog = _parse_blog(channel) authors = _parse_authors(channel) categories = _parse_categories(channel) tags = _parse_tags(channel) posts = _parse_posts(channel) return { "blog": blog, "authors"...
Parses xml and returns a formatted dict. Example: wpparser.parse("./blog.wordpress.2014-09-26.xml") Will return: { "blog": { "tagline": "Tagline", "site_url": "http://marteinn.se/blog", "blog_url": "http://marteinn.se/blog", "language":...
def parse_url_rules(urls_fp):
    url_rules = []
    for line in urls_fp:
        re_url = line.strip()
        if re_url:
            url_rules.append({'str': re_url, 're': re.compile(re_url)})
    return url_rules
Parse URL rules (one regex pattern per line) from the given file object.
def sample_counters(mc, system_info): return { (x, y): mc.get_router_diagnostics(x, y) for (x, y) in system_info }
Sample every router counter in the machine.
def Set(self, name, initial=None): return types.Set(name, self.api, initial)
The set datatype. :param name: The name of the set. :keyword initial: Initial members of the set. See :class:`redish.types.Set`.
def check_wide_data_for_blank_choices(choice_col, wide_data):
    if wide_data[choice_col].isnull().any():
        msg_1 = "One or more of the values in wide_data[choice_col] is null."
        msg_2 = " Remove null values in the choice column or fill them in."
        raise ValueError(msg_1 + msg_2)
    return None
Checks `wide_data` for null values in the choice column, and raises a helpful ValueError if null values are found. Parameters ---------- choice_col : str. Denotes the column in `wide_data` that is used to record each observation's choice. wide_data : pandas dataframe. Contai...
def safe_mkdir(directory, clean=False):
    if clean:
        safe_rmtree(directory)
    try:
        os.makedirs(directory)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
Safely create a directory. Ensures a directory is present. If it's not there, it is created. If it is, it's a no-op. If clean is True, ensures the directory is empty.
def loadJSON(self, jdata): self.__name = jdata['name'] self.__field = jdata['field'] self.__display = jdata.get('display') or self.__display self.__flags = jdata.get('flags') or self.__flags self.__defaultOrder = jdata.get('defaultOrder') or self.__defaultOrder self.__def...
Initializes the information for this class from the given JSON data blob. :param jdata: <dict>
def define_page_breakpoint(self, dwProcessId, address, pages = 1, condition = True, action = None): process = self.system.get_process(dwProcessId) bp = PageBreakpoint(a...
Creates a disabled page breakpoint at the given address. @see: L{has_page_breakpoint}, L{get_page_breakpoint}, L{enable_page_breakpoint}, L{enable_one_shot_page_breakpoint}, L{disable_page_breakpoint}, L{erase_page_breakpoint} @ty...
def deleteRole(self, roleID): url = self._url + "/%s/delete" % roleID params = { "f" : "json" } return self._post(url=url, param_dict=params, proxy_url=self._proxy_url, proxy_port=self._pro...
deletes a role by ID
def run_latex_report(base, report_dir, section_info): out_name = "%s_recal_plots.tex" % base out = os.path.join(report_dir, out_name) with open(out, "w") as out_handle: out_tmpl = Template(out_template) out_handle.write(out_tmpl.render(sections=section_info)) start_dir = os.getcwd() ...
Generate a pdf report with plots using latex.
def add_path(prev: Optional[ResponsePath], key: Union[str, int]) -> ResponsePath: return ResponsePath(prev, key)
Add a key to a response path. Given a ResponsePath and a key, return a new ResponsePath containing the new key.
def path_for_doc(self, doc_id): full_path = self.path_for_doc_fn(self.repo, doc_id) return full_path
Returns the full filepath for doc_id.
def snapshot_share(self, share_name, metadata=None, quota=None, timeout=None): _validate_not_none('share_name', share_name) request = HTTPRequest() request.method = 'PUT' request.host_locations = self._get_host_locations() request.path = _get_path(share_name) request.quer...
Creates a snapshot of an existing share under the specified account. :param str share_name: The name of the share to create a snapshot of. :param metadata: A dict with name_value pairs to associate with the share as metadata. Example:{'Category':'test'} :type...
def track_child(self, child, logical_block_size, allow_duplicate=False):
    if not self._initialized:
        raise pycdlibexception.PyCdlibInternalError('Directory Record not yet initialized')
    self._add_child(child, logical_block_size, allow_duplicate, False)
A method to track an existing child of this directory record. Parameters: child - The child directory record object to add. logical_block_size - The size of a logical block for this volume descriptor. allow_duplicate - Whether to allow duplicate names, as there are ...
def _load_table(self, name): table = self._tables.get(name, None) if table is not None: return table if not self.engine.has_table(name): raise BindingException('Table does not exist: %r' % name, table=name) table = Table(name, se...
Reflect a given table from the database.
def symbols_to_prob(symbols):
    myCounter = Counter(symbols)
    # Sum the counts rather than re-reading `symbols`, which is already
    # exhausted at this point when it is an iterator (e.g. a zip in Python 3).
    N = float(sum(myCounter.values()))
    for k in myCounter:
        myCounter[k] /= N
    return myCounter
Return a dict mapping symbols to probability. input: ----- symbols: iterable of hashable items works well if symbols is a zip of iterables
def _handle_poll(self, relpath, params): request = json.loads(params.get('q')[0]) ret = {} for poll in request: _id = poll.get('id', None) path = poll.get('path', None) pos = poll.get('pos', 0) if path: abspath = os.path.normpath(os.path.join(self._root, path)) if os....
Handle poll requests for raw file contents.
def validate_quantity(self, value):
    if not isinstance(value, pq.quantity.Quantity):
        self._error('%s' % value, "Must be a Python quantity.")
Validate that the value is of the `Quantity` type.
def system_update_keyspace(self, ks_def):
    self._seqid += 1
    d = self._reqs[self._seqid] = defer.Deferred()
    self.send_system_update_keyspace(ks_def)
    return d
Updates properties of a keyspace. Returns the new schema id. Parameters: - ks_def
def get_service_status(self, name): svc = self._query_service(name) if svc is not None: return {'name': name, 'status': self.parse_query(svc['output']), 'output': svc['output'] } else: return {'name': name, ...
Returns the status for the given service name along with the output of the query command
def stat_smt_query(func: Callable): stat_store = SolverStatistics() def function_wrapper(*args, **kwargs): if not stat_store.enabled: return func(*args, **kwargs) stat_store.query_count += 1 begin = time() result = func(*args, **kwargs) end = time() st...
Measures statistics for annotated smt query check function
def git_ls_files(*cmd_args):
    cmd = ['git', 'ls-files']
    cmd.extend(cmd_args)
    return set(subprocess.check_output(cmd).splitlines())
Run ``git ls-files`` in the top-level project directory. Arguments go directly to execution call. :return: set of file names :rtype: :class:`set`
def info_gain(current_impurity, true_branch, false_branch, criterion): measure_impurity = gini_impurity if criterion == "gini" else entropy p = float(len(true_branch)) / (len(true_branch) + len(false_branch)) return current_impurity - p * measure_impurity(true_branch) - (1 - p) * measure_impurity(false_bran...
Information Gain. The uncertainty of the starting node, minus the weighted impurity of two child nodes.
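A runnable sketch of the same computation using Gini impurity; here branches are plain lists of class labels, a simplifying assumption (the original dispatches on a `criterion` string and supports entropy as well):

```python
from collections import Counter

def gini_impurity(labels):
    # 1 minus the sum of squared class probabilities.
    total = float(len(labels))
    return 1.0 - sum((c / total) ** 2 for c in Counter(labels).values())

def info_gain(current_impurity, true_branch, false_branch):
    # Parent impurity minus the size-weighted impurity of the two children.
    p = float(len(true_branch)) / (len(true_branch) + len(false_branch))
    return (current_impurity
            - p * gini_impurity(true_branch)
            - (1 - p) * gini_impurity(false_branch))
```

A perfect split of `['a', 'a', 'b', 'b']` (parent impurity 0.5) into pure children yields a gain of 0.5.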
def _error(self, request, status, headers={}, prefix_template_path=False, **kwargs): return self._render( request = request, template = str(status), status = status, context = { 'error': kwargs }, headers = headers, ...
Convenience method to render an error response. The template is inferred from the status code. :param request: A django.http.HttpRequest instance. :param status: An integer describing the HTTP status code to respond with. :param headers: A dictionary describing HTTP headers. :param pref...
def end_index(self):
    paginator = self.paginator
    if self.number == paginator.num_pages:
        return paginator.count
    return (self.number - 1) * paginator.per_page + paginator.first_page
Return the 1-based index of the last item on this page.
def to_xarray(input):
    from climlab.domain.field import Field
    if isinstance(input, Field):
        return Field_to_xarray(input)
    elif isinstance(input, dict):
        return state_to_xarray(input)
    else:
        raise TypeError('input must be Field object or dictionary of Field objects')
Convert climlab input to xarray format. If input is a climlab.Field object, return xarray.DataArray If input is a dictionary (e.g. process.state or process.diagnostics), return xarray.Dataset object with all spatial axes, including 'bounds' axes indicating cell boundaries in each spatial dimension. ...
def getsourcefile(object): filename = getfile(object) if string.lower(filename[-4:]) in ['.pyc', '.pyo']: filename = filename[:-4] + '.py' for suffix, mode, kind in imp.get_suffixes(): if 'b' in mode and string.lower(filename[-len(suffix):]) == suffix: return None if os.path....
Return the Python source file an object was defined in, if it exists.
def facets_normal(self): if len(self.facets) == 0: return np.array([]) area_faces = self.area_faces index = np.array([i[area_faces[i].argmax()] for i in self.facets]) normals = self.face_normals[index] origins = self.vertices[self.faces[:, 0]...
Return the normal of each facet Returns --------- normals: (len(self.facets), 3) float A unit normal vector for each facet
def GetListSelect(selectList, title="Select", msg=""): root = tkinter.Tk() root.title(title) label = tkinter.Label(root, text=msg) label.pack() listbox = tkinter.Listbox(root) for i in selectList: listbox.insert(tkinter.END, i) listbox.pack() tkinter.Button(root, text="OK", fg="b...
Create a list from selectList, then return the selected string and index. title: Window name msg: Label of the list return (selectedItem, selectedIndex)
def file_identifier(self):
    if not self._initialized:
        raise pycdlibexception.PyCdlibInternalError('UDF File Entry not initialized')
    if self.file_ident is None:
        return b'/'
    return self.file_ident.fi
A method to get the name of this UDF File Entry as a byte string. Parameters: None. Returns: The UDF File Entry as a byte string.
def wait_for_tx(self, tx, max_seconds=120): tx_hash = None if isinstance(tx, (str, UInt256)): tx_hash = str(tx) elif isinstance(tx, Transaction): tx_hash = tx.Hash.ToString() else: raise AttributeError("Supplied tx is type '%s', but must be Transaction or UInt256 or str" % type(t...
Wait for tx to show up on blockchain Args: tx (Transaction or UInt256 or str): Transaction or just the hash max_seconds (float): maximum seconds to wait for tx to show up. default: 120 Returns: True: if transaction was found Raises: AttributeError: if supplied tx is not Tr...
def complete_object_value( self, return_type: GraphQLObjectType, field_nodes: List[FieldNode], info: GraphQLResolveInfo, path: ResponsePath, result: Any, ) -> AwaitableOrValue[Dict[str, Any]]: if return_type.is_type_of: is_type_of = return_type.is_...
Complete an Object value by executing all sub-selections.
def update(self): url = self.baseurl + '/_status?format=xml' response = self.s.get(url) response.raise_for_status() from xml.etree.ElementTree import XML root = XML(response.text) for serv_el in root.iter('service'): serv = Monit.Service(self, serv_el) ...
Update Monit daemon and services status.
def cloud_front_origin_access_identity_exists(Id, region=None, key=None, keyid=None, profile=None):
    authargs = {'region': region, 'key': key, 'keyid': keyid, 'profile': profile}
    oais = list_cloud_front_origin_access_identities(**authargs) or []
    return bool([i['Id'] for i in oais if i['Id'] == Id])
Return True if a CloudFront origin access identity exists with the given Resource ID or False otherwise. Id Resource ID of the CloudFront origin access identity. region Region to connect to. key Secret key to use. keyid Access key to use. profile Dict...
def _call(self, x, out=None): if out is None: out = self.range.zero() for i, j, op in zip(self.ops.row, self.ops.col, self.ops.data): out[i] += op(x[j]) else: has_evaluated_row = np.zeros(len(self.range), dtype=bool) for i, j, op in zip(sel...
Call the operators on the parts of ``x``.
def credit(self, amount, debit_account, description, debit_memo="", credit_memo="", datetime=None):
    assert amount >= 0
    return self.post(-amount, debit_account, description,
                     self_memo=credit_memo, other_memo=debit_memo, datetime=datetime)
Post a credit of 'amount' and a debit of -amount against this account and debit_account respectively. Note: amount must be non-negative.
def generate(self, labels, split_idx): atom_labels = [label[0] for label in labels] noise = [] distribution_function = distributions[self.distribution_name]["function"] for label in atom_labels: params = [self.parameters["{}_{}".format(label, param)][split_idx] ...
Generate peak-specific noise abstract method, must be reimplemented in a subclass. :param tuple labels: Dimension labels of a peak. :param int split_idx: Index specifying which peak list split parameters to use. :return: List of noise values for dimensions ordered as they appear in a peak. ...
def wrap_case_result(raw, expr): raw_1d = np.atleast_1d(raw) if np.any(pd.isnull(raw_1d)): result = pd.Series(raw_1d) else: result = pd.Series( raw_1d, dtype=constants.IBIS_TYPE_TO_PANDAS_TYPE[expr.type()] ) if result.size == 1 and isinstance(expr, ir.ScalarExpr): ...
Wrap a CASE statement result in a Series and handle returning scalars. Parameters ---------- raw : ndarray[T] The raw results of executing the ``CASE`` expression expr : ValueExpr The expression from the which `raw` was computed Returns ------- Union[scalar, Series]
def syndic_cmd(self, data): if 'tgt_type' not in data: data['tgt_type'] = 'glob' kwargs = {} for field in ('master_id', 'user', ): if field in data: kwargs[field] = data[field] def timeout_handler(*args):...
Take the now clear load and forward it on to the client cmd
def get_method(name):
    name = _format_name(name)
    try:
        return METHODS[name]
    except KeyError as exc:
        exc.args = ("no PSD method registered with name {0!r}".format(name),)
        raise
Return the PSD method registered with the given name.
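The lookup above implies a module-level registry; a minimal sketch of that pattern (the `_format_name` normalization and `register_method` helper here are assumptions, not the original API):

```python
METHODS = {}

def _format_name(name):
    # Hypothetical normalization: case-insensitive, dashes become underscores.
    return name.lower().replace('-', '_')

def register_method(func, name):
    METHODS[_format_name(name)] = func

def get_method(name):
    name = _format_name(name)
    try:
        return METHODS[name]
    except KeyError as exc:
        # Keep the exception type, but give it a friendlier message.
        exc.args = ("no PSD method registered with name {0!r}".format(name),)
        raise
```

Mutating `exc.args` inside the handler preserves the original `KeyError` type and traceback while improving the message.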
def get_app_index_dashboard(context): app = context['app_list'][0] model_list = [] app_label = None app_title = app['name'] admin_site = get_admin_site(context=context) for model, model_admin in admin_site._registry.items(): if app['app_label'] == model._meta.app_label: split...
Returns the admin dashboard defined by the user or the default one.
async def wait_for_connection_lost(self) -> bool: if not self.connection_lost_waiter.done(): try: await asyncio.wait_for( asyncio.shield(self.connection_lost_waiter), self.close_timeout, loop=self.loop, ) ...
Wait until the TCP connection is closed or ``self.close_timeout`` elapses. Return ``True`` if the connection is closed and ``False`` otherwise.
def dct2(input, K=13):
    nframes, N = input.shape
    freqstep = numpy.pi / N
    cosmat = dctmat(N, K, freqstep, False)
    return numpy.dot(input, cosmat) * (2.0 / N)
Convert log-power-spectrum to MFCC using the normalized DCT-II
def list_websites(self):
    self.connect()
    results = self.server.list_websites(self.session_id)
    return results
Return all websites, name is not a key
def handle_single_request(self, request_object): if not isinstance(request_object, (MethodCall, Notification)): raise TypeError("Invalid type for request_object") method_name = request_object.method_name params = request_object.params req_id = request_object.id reques...
Handles a single request object and returns the raw response :param request_object:
def start(host, port=5959, tag='salt/engine/logstash', proto='udp'): if proto == 'tcp': logstashHandler = logstash.TCPLogstashHandler elif proto == 'udp': logstashHandler = logstash.UDPLogstashHandler logstash_logger = logging.getLogger('python-logstash-logger') logstash_logger.setLevel(...
Listen to salt events and forward them to logstash
def scale_calculator(multiplier, elements, rescale=None): if isinstance(elements, list): unique_elements = list(set(elements)) scales = {} for x in unique_elements: count = elements.count(x) scales[x] = multiplier * count elif isinstance(elements, dict): s...
Get a dictionary of scales for each element in elements. Examples: >>> scale_calculator(1, [2,7,8]) {8: 1, 2: 1, 7: 1} >>> scale_calculator(1, [2,2,2,3,4,5,5,6,7,8]) {2: 3, 3: 1, 4: 1, 5: 2, 6: 1, 7: 1, 8: 1} >>> scale_calculator(1, [2,2,2,3,4,5,5,6,7,8], rescale=(0.5,1)) ...
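The list branch of the function is fully visible and can be sketched on its own (the dict branch and the `rescale` handling are truncated above, so they are omitted here):

```python
def scale_calculator(multiplier, elements):
    # One entry per unique element: multiplier times its occurrence count.
    return {x: multiplier * elements.count(x) for x in set(elements)}
```

This reproduces the first two docstring examples exactly.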
def display(table, limit=0, vrepr=None, index_header=None, caption=None, tr_style=None, td_styles=None, encoding=None, truncate=None, epilogue=None): from IPython.core.display import display_html html = _display_html(table, limit=limit, vrepr=vrepr, index_header=...
Display a table inline within an IPython notebook.
def as_translation_key(self): return TranslationKey(**{ name: getattr(self, name) for name in TranslationKey._fields})
Project Translation object or any other derived class into just a TranslationKey, which has fewer fields and can be used as a dictionary key.
def _get_item_from_search_response(self, response, type_): sections = sorted(response['sections'], key=lambda sect: sect['type'] == type_, reverse=True) for section in sections: hits = [hit for hit in section['hits'] if hit['type'] == type_...
Returns either a Song or Artist result from search_genius_web
def output(self, context, *args, **kwargs): output_fields = self.output_fields output_type = self.output_type if output_fields and output_type: raise UnrecoverableError("Cannot specify both output_fields and output_type option.") if self.output_type: context.set_o...
Allow all readers to use eventually use output_fields XOR output_type options.
def get_by_name(self, name, style_type=None):
    for st in self.styles.values():
        if st:
            if st.name == name:
                return st
    if style_type and not st:
        st = self.styles.get(self.default_styles[style_type], None)
    return st
Find style by its descriptive name. :Returns: Returns found style of type :class:`ooxml.doc.Style`.
def synchronized(lock):
    @simple_decorator
    def wrap(function_target):
        def new_function(*args, **kw):
            lock.acquire()
            try:
                return function_target(*args, **kw)
            finally:
                lock.release()
        return new_function
    return wrap
Synchronization decorator. Allows setting a mutex on any function.
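An equivalent, self-contained sketch that drops the (undefined here) `simple_decorator` helper and uses a `with` statement for the acquire/release pair; `bump` and `counter` are illustrative:

```python
import threading

def synchronized(lock):
    # Decorator factory: every call to the wrapped function runs under `lock`.
    def wrap(fn):
        def wrapper(*args, **kwargs):
            with lock:  # released even if fn raises
                return fn(*args, **kwargs)
        return wrapper
    return wrap

counter = {'n': 0}

@synchronized(threading.Lock())
def bump():
    counter['n'] += 1
```

The `with lock:` block is equivalent to the explicit try/finally in the original: the lock is always released, even on exceptions.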
def id_getter(self, relation_name, strict=False):
    def get_id(old_id):
        return self.get_new_id(relation_name, old_id, strict)
    return get_id
Returns a function that accepts an old_id and returns the new ID for the enclosed relation name.
def assign_issue(self, issue, assignee):
    url = self._options['server'] + \
        '/rest/api/latest/issue/' + str(issue) + '/assignee'
    payload = {'name': assignee}
    r = self._session.put(url, data=json.dumps(payload))
    raise_on_error(r)
    return True
Assign an issue to a user. None will set it to unassigned. -1 will set it to Automatic. :param issue: the issue ID or key to assign :type issue: int or str :param assignee: the user to assign the issue to :type assignee: str :rtype: bool
def deprecated(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        warnings.warn(fn.__doc__.split('\n')[0],
                      category=DeprecationWarning, stacklevel=2)
        return fn(*args, **kwargs)
    return wrapper
Mark a function as deprecated and warn the user on use.
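The decorator reuses the first line of the wrapped function's docstring as the warning message; a usage sketch (`old_add` is a hypothetical function, not from the original library):

```python
import functools
import warnings

def deprecated(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # The first docstring line doubles as the warning text.
        warnings.warn(fn.__doc__.split('\n')[0],
                      category=DeprecationWarning, stacklevel=2)
        return fn(*args, **kwargs)
    return wrapper

@deprecated
def old_add(a, b):
    """old_add is deprecated; use operator.add instead."""
    return a + b
```

`stacklevel=2` makes the warning point at the caller's line rather than the wrapper's.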
def sample_size_necessary_under_cph(power, ratio_of_participants, p_exp, p_con, postulated_hazard_ratio, alpha=0.05): def z(p): return stats.norm.ppf(p) m = ( 1.0 / ratio_of_participants * ((ratio_of_participants * postulated_hazard_ratio + 1.0) / (postulated_hazard_ratio - 1.0))...
This computes the sample size for needed power to compare two groups under a Cox Proportional Hazard model. Parameters ---------- power : float power to detect the magnitude of the hazard ratio as small as that specified by postulated_hazard_ratio. ratio_of_participants: ratio of participa...
def _create_state_data(self, context, resp_args, relay_state):
    if "name_id_policy" in resp_args and resp_args["name_id_policy"] is not None:
        resp_args["name_id_policy"] = resp_args["name_id_policy"].to_string().decode("utf-8")
    return {"resp_args": resp_args, "relay_state": relay_state}
Returns a dict containing the state needed in the response flow. :type context: satosa.context.Context :type resp_args: dict[str, str | saml2.samlp.NameIDPolicy] :type relay_state: str :rtype: dict[str, dict[str, str] | str] :param context: The current context :param re...
def omitted_parcov(self):
    if self.__omitted_parcov is None:
        self.log("loading omitted_parcov")
        self.__load_omitted_parcov()
        self.log("loading omitted_parcov")
    return self.__omitted_parcov
get the omitted prior parameter covariance matrix Returns ------- omitted_parcov : pyemu.Cov Note ---- returns a reference If ErrorVariance.__omitted_parcov is None, attribute is dynamically loaded
def get_info(self):
    field = self._current_field()
    if field:
        info = field.get_info()
        info['path'] = '%s/%s' % (self.name if self.name else '<no name>', info['path'])
    else:
        info = super(Container, self).get_info()
    return info
Get info regarding the current fuzzed enclosed node :return: info dictionary
def largest_connected_submatrix(C, directed=True, lcc=None):
    if isdense(C):
        return sparse.connectivity.largest_connected_submatrix(
            csr_matrix(C), directed=directed, lcc=lcc).toarray()
    else:
        return sparse.connectivity.largest_connected_submatrix(
            C, directed=directed, lcc=lcc)
r"""Compute the count matrix on the largest connected set. Parameters ---------- C : scipy.sparse matrix Count matrix specifying edge weights. directed : bool, optional Whether to compute connected components for a directed or undirected graph. Default is True lcc : (M,) ndarr...
def word_wrap_tree(parented_tree, width=0):
    if width != 0:
        for i, leaf_text in enumerate(parented_tree.leaves()):
            dedented_text = textwrap.dedent(leaf_text).strip()
            parented_tree[parented_tree.leaf_treeposition(i)] = textwrap.fill(dedented_text, width=width)
    return parented_tree
line-wrap an NLTK ParentedTree for pretty-printing
def real_main(release_url=None, tests_json_path=None, upload_build_id=None, upload_release_name=None): coordinator = workers.get_coordinator() fetch_worker.register(coordinator) coordinator.start() data = open(FLAGS.tests_json_path).read() tests = load_tests...
Runs diff_my_images.
def populateFromRow(self, continuousSetRecord): self._filePath = continuousSetRecord.dataurl self.setAttributesJson(continuousSetRecord.attributes)
Populates the instance variables of this ContinuousSet from the specified DB row.
def fill_dcnm_subnet_info(self, tenant_id, subnet, start, end, gateway, sec_gateway, direc): serv_obj = self.get_service_obj(tenant_id) fw_dict = serv_obj.get_fw_dict() fw_id = fw_dict.get('fw_id') if direc == 'in': name = fw_id[0:4] + fw_const.I...
Fills the DCNM subnet parameters. Function that fills the subnet parameters for a tenant required by DCNM.
def export_model(model, model_type, export_dir, model_column_fn): wide_columns, deep_columns = model_column_fn() if model_type == 'wide': columns = wide_columns elif model_type == 'deep': columns = deep_columns else: columns = wide_columns + deep_columns feature_spec = tf.feature_column.make_parse...
Export to SavedModel format. Args: model: Estimator object model_type: string indicating model type. "wide", "deep" or "wide_deep" export_dir: directory to export the model. model_column_fn: Function to generate model feature columns.
def getTmpFilename(self, tmp_dir="/tmp",prefix='tmp',suffix='.fasta',\ include_class_id=False,result_constructor=FilePath): return super(Pplacer,self).getTmpFilename(tmp_dir=tmp_dir, prefix=prefix, suffix=suffix, ...
Define Tmp filename to contain .fasta suffix, since pplacer requires the suffix to be .fasta
def get_device_by_id(self, device_id): found_device = None for device in self.get_devices(): if device.device_id == device_id: found_device = device break if found_device is None: logger.debug('Did not find device with {}'.format(device_id)) ...
Search the list of connected devices by ID. device_id param is the integer ID of the device
def env(self): from copy import copy env = copy(self.doc.env) assert env is not None, 'Got a null execution context' env.update(self._envvar_env) env.update(self.all_props) return env
The execution context for rowprocessors and row-generating notebooks and functions.
def _get(self, url, query=None): if query is None: query = {} response = retry_request(self)(self._http_get)(url, query=query) if self.raw_mode: return response if response.status_code != 200: error = get_error(response) if self.raise_error...
Wrapper for the HTTP Request, Rate Limit Backoff is handled here, Responses are Processed with ResourceBuilder.
def warning (self, msg, pos=None): self.log(msg, 'warning: ' + self.location(pos))
Logs a warning message pertaining to the given SeqAtom.