Applies contextual rules to the given list of tokens, where each token is a [word, tag] list. def apply(self, tokens): """ Applies contextual rules to the given list of tokens, where each token is a [word, tag] list. """ o = [("STAART", "STAART")] * 3 # Empty delimiters ...
Inserts a new rule that updates words with tag1 to tag2, given constraints x and y, e.g., Context.append("TO < NN", "VB") def insert(self, i, tag1, tag2, cmd="prevtag", x=None, y=None): """ Inserts a new rule that updates words with tag1 to tag2, given constraints x and y, e.g., Context...
Applies the named entity recognizer to the given list of tokens, where each token is a [word, tag] list. def apply(self, tokens): """ Applies the named entity recognizer to the given list of tokens, where each token is a [word, tag] list. """ # Note: we could also scan f...
Appends a named entity to the lexicon, e.g., Entities.append("Hooloovoo", "PERS") def append(self, entity, name="pers"): """ Appends a named entity to the lexicon, e.g., Entities.append("Hooloovoo", "PERS") """ e = map(lambda s: s.lower(), entity.split(" ") + [name]) ...
Returns a sorted list of keywords in the given string. def find_keywords(self, string, **kwargs): """ Returns a sorted list of keywords in the given string. """ return find_keywords(string, parser = self, top = kwargs.pop("top", 10), ...
Returns a list of sentences from the given string. Punctuation marks are separated from each word by a space. def find_tokens(self, string, **kwargs): """ Returns a list of sentences from the given string. Punctuation marks are separated from each word by a space. """ # ...
Annotates the given list of tokens with part-of-speech tags. Returns a list of tokens, where each token is now a [word, tag]-list. def find_tags(self, tokens, **kwargs): """ Annotates the given list of tokens with part-of-speech tags. Returns a list of tokens, where each token is now a ...
Annotates the given list of tokens with chunk tags. Several tags can be added, for example chunk + preposition tags. def find_chunks(self, tokens, **kwargs): """ Annotates the given list of tokens with chunk tags. Several tags can be added, for example chunk + preposition tags. ...
Takes a string (sentences) and returns a tagged Unicode string (TaggedString). Sentences in the output are separated by newlines. With tokenize=True, punctuation is split from words and sentences are separated by \n. With tags=True, part-of-speech tags are parsed (NN, VB, IN, ...). ...
Returns a list of sentences, where each sentence is a list of tokens, where each token is a list of word + tags. def split(self, sep=TOKENS): """ Returns a list of sentences, where each sentence is a list of tokens, where each token is a list of word + tags. """ if sep !...
Yields a list of tenses for this language, excluding negations. Each tense is a (tense, person, number, mood, aspect)-tuple. def TENSES(self): """ Yields a list of tenses for this language, excluding negations. Each tense is a (tense, person, number, mood, aspect)-tuple. """ ...
Returns the infinitive form of the given verb, or None. def lemma(self, verb, parse=True): """ Returns the infinitive form of the given verb, or None. """ if dict.__len__(self) == 0: self.load() if verb.lower() in self._inverse: return self._inverse[verb.lower()]...
Returns a list of all possible inflections of the given verb. def lexeme(self, verb, parse=True): """ Returns a list of all possible inflections of the given verb. """ a = [] b = self.lemma(verb, parse=parse) if b in self: a = [x for x in self[b] if x != ""] ...
Inflects the verb and returns the given tense (or None). For example: be - Verbs.conjugate("is", INFINITIVE) => be - Verbs.conjugate("be", PRESENT, 1, SINGULAR) => I am - Verbs.conjugate("be", PRESENT, 1, PLURAL) => we are - Verbs.conjugate("be", PAST, 3, SINGU...
Returns a list of possible tenses for the given inflected verb. def tenses(self, verb, parse=True): """ Returns a list of possible tenses for the given inflected verb. """ verb = verb.lower() a = set() b = self.lemma(verb, parse=parse) v = [] if b in self: ...
Loads the XML-file (with sentiment annotations) from the given path. By default, Sentiment.path is lazily loaded. def load(self, path=None): """ Loads the XML-file (with sentiment annotations) from the given path. By default, Sentiment.path is lazily loaded. """ # <word ...
Returns a (polarity, subjectivity)-tuple for the given synset id. For example, the adjective "horrible" has id 193480 in WordNet: Sentiment.synset(193480, pos="JJ") => (-0.6, 1.0, 1.0). def synset(self, id, pos=ADJECTIVE): """ Returns a (polarity, subjectivity)-tuple for the given synse...
Returns a list of (chunk, polarity, subjectivity, label)-tuples for the given list of words: where chunk is a list of successive words: a known word optionally preceded by a modifier ("very good") or a negation ("not good"). def assessments(self, words=[], negation=True): """ Returns a ...
Annotates the given word with polarity, subjectivity and intensity scores, and optionally a semantic label (e.g., MOOD for emoticons, IRONY for "(!)"). def annotate(self, word, pos=None, polarity=0.0, subjectivity=0.0, intensity=1.0, label=None): """ Annotates the given word with polarity, subjecti...
Counts the words in the given string and saves the probabilities at the given path. This can be used to generate a new model for the Spelling() constructor. def train(self, s, path="spelling.txt"): """ Counts the words in the given string and saves the probabilities at the given path. T...
Returns a set of words with edit distance 1 from the given word. def _edit1(self, w): """ Returns a set of words with edit distance 1 from the given word. """ # Of all spelling errors, 80% is covered by edit distance 1. # Edit distance 1 = one character deleted, swapped, replaced or ins...
Returns a set of words with edit distance 2 from the given word. def _edit2(self, w): """ Returns a set of words with edit distance 2 from the given word. """ # Of all spelling errors, 99% is covered by edit distance 2. # Only keep candidates that are actually known words (20% speedup). ...
Return a list of (word, confidence) spelling corrections for the given word, based on the probability of known words with edit distance 1-2 from the given word. def suggest(self, w): """ Return a list of (word, confidence) spelling corrections for the given word, based on the probabilit...
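The edit-distance-1 candidate generation that `_edit1` describes (one character deleted, swapped, replaced, or inserted) can be sketched in a few lines. This is an illustrative standalone version, not the library's internals; the function name and the lowercase ASCII alphabet are assumptions:

```python
import string

def edit1(w):
    """Return the set of strings one edit away from w:
    one character deleted, transposed, replaced, or inserted."""
    splits = [(w[:i], w[i:]) for i in range(len(w) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in string.ascii_lowercase]
    inserts = [a + c + b for a, b in splits for c in string.ascii_lowercase]
    return set(deletes + transposes + replaces + inserts)
```

In a spelling corrector, `suggest` would intersect this candidate set (and its edit-distance-2 expansion) with the known-word model and rank by word probability.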
Returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables (or default if too short). def zip(*args, **kwargs): """ Returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterab...
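The padded `zip` described here behaves like the standard library's `itertools.zip_longest`; a minimal sketch under that assumption (the wrapper name is illustrative):

```python
from itertools import zip_longest

def zip_default(*args, default=None):
    """Like zip(), but pads shorter iterables with `default`
    instead of truncating to the shortest one."""
    return list(zip_longest(*args, fillvalue=default))
```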
Returns a list of Chunk and Chink objects from the given sentence. Chink is a subclass of Chunk used for words that have Word.chunk == None (e.g., punctuation marks, conjunctions). def chunked(sentence): """ Returns a list of Chunk and Chink objects from the given sentence. Chink is a subcl...
Transforms the output of parse() into a Text object. The token parameter lists the order of tags in each token in the input string. def tree(string, token=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA]): """ Transforms the output of parse() into a Text object. The token parameter lists the order of ta...
Returns the string with XML-safe special characters. def xml_encode(string): """ Returns the string with XML-safe special characters. """ string = string.replace("&", "&amp;") string = string.replace("<", "&lt;") string = string.replace(">", "&gt;") string = string.replace("\"","&quot;") st...
Returns the string with special characters decoded. def xml_decode(string): """ Returns the string with special characters decoded. """ string = string.replace("&amp;", "&") string = string.replace("&lt;", "<") string = string.replace("&gt;", ">") string = string.replace("&quot;","\"") st...
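The two functions above are inverses, but replacement order matters: encoding must handle `&` first (otherwise the `&` in `&lt;` would be re-encoded), and decoding must handle `&amp;` last (otherwise `&amp;lt;` would wrongly collapse to `<`). A self-contained sketch with that ordering:

```python
def xml_encode(s):
    # "&" first, so the "&" produced by the other entities is not re-encoded.
    for plain, entity in (("&", "&amp;"), ("<", "&lt;"), (">", "&gt;"), ('"', "&quot;")):
        s = s.replace(plain, entity)
    return s

def xml_decode(s):
    # "&amp;" last, so "&amp;lt;" round-trips back to "&lt;", not "<".
    for entity, plain in (("&lt;", "<"), ("&gt;", ">"), ("&quot;", '"'), ("&amp;", "&")):
        s = s.replace(entity, plain)
    return s
```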
Returns the given Sentence object as an XML-string (plain bytestring, UTF-8 encoded). The tab delimiter is used as indentation for nested elements. The id can be used as a unique identifier per sentence for chunk ids and anchors. For example: "I eat pizza with a fork." => <sent...
Returns a slash-formatted string from the given XML representation. The return value is a TokenString (for MBSP) or TaggedString (for Pattern). def parse_string(xml): """ Returns a slash-formatted string from the given XML representation. The return value is a TokenString (for MBSP) or TaggedString...
Parses tokens from <word> elements in the given XML <chunk> element. Returns a flat list of tokens, in which each token is [WORD, POS, CHUNK, PNP, RELATION, ANCHOR, LEMMA]. If a <chunk type="PNP"> is encountered, traverses all of the chunks in the PNP. def _parse_tokens(chunk, format=[WORD, POS, CHUNK,...
Returns a string of the roles and relations parsed from the given <chunk> element. The chunk type (which is part of the relation string) can be given as parameter. def _parse_relation(chunk, type="O"): """ Returns a string of the roles and relations parsed from the given <chunk> element. The chunk ...
Returns a list of token tags parsed from the given <word> element. Tags that are not attributes in a <word> (e.g., relation) can be given as parameters. def _parse_token(word, chunk="O", pnp="O", relation="O", anchor="O", format=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA]): """ Returns a ...
Returns an NLTK nltk.tree.Tree object from the given Sentence. The NLTK module should be on the search path somewhere. def nltk_tree(sentence): """ Returns an NLTK nltk.tree.Tree object from the given Sentence. The NLTK module should be on the search path somewhere. """ from nltk import tre...
Returns a dot-formatted string that can be visualized as a graph in GraphViz. def graphviz_dot(sentence, font="Arial", colors=BLUE): """ Returns a dot-formatted string that can be visualized as a graph in GraphViz. """ s = 'digraph sentence {\n' s += '\tranksep=0.75;\n' s += '\tnodesep=0.15;\n' ...
Returns a string where the tags of tokens in the sentence are organized in outlined columns. def table(sentence, fill=1, placeholder="-"): """ Returns a string where the tags of tokens in the sentence are organized in outlined columns. """ tags = [WORD, POS, IOB, CHUNK, ROLE, REL, PNP, ANCHOR, LEMMA] ...
Yields a list of all the token tags as they appeared when the word was parsed. For example: ["was", "VBD", "B-VP", "O", "VP-1", "A1", "be"] def tags(self): """ Yields a list of all the token tags as they appeared when the word was parsed. For example: ["was", "VBD", "B-VP", "O", "VP-1",...
Returns the next word in the sentence with the given type. def next(self, type=None): """ Returns the next word in the sentence with the given type. """ i = self.index + 1 s = self.sentence while i < len(s): if type in (s[i].type, None): return s[i] ...
Returns the previous word in the sentence with the given type. def previous(self, type=None): """ Returns the previous word in the sentence with the given type. """ i = self.index - 1 s = self.sentence while i >= 0: if type in (s[i].type, None): ...
Yields the head of the chunk (usually, the last word in the chunk). def head(self): """ Yields the head of the chunk (usually, the last word in the chunk). """ if self.type == "NP" and any(w.type.startswith("NNP") for w in self): w = find(lambda w: w.type.startswith("NNP"), reversed...
Yields a list of all chunks in the sentence with the same relation id. def related(self): """ Yields a list of all chunks in the sentence with the same relation id. """ return [ch for ch in self.sentence.chunks if ch != self and intersects(unzip(0, ch.relations), unzip(0, s...
Yields the anchor tag as parsed from the original token. Chunks that are anchors have a tag with an "A" prefix (e.g., "A1"). Chunks that are PNP attachments (or chunks inside a PNP) have "P" (e.g., "P1"). Chunks inside a PNP can be both anchor and attachment (e.g., "P1-A2"), ...
For verb phrases (VP), yields a list of the nearest adjectives and adverbs. def modifiers(self): """ For verb phrases (VP), yields a list of the nearest adjectives and adverbs. """ if self._modifiers is None: # Iterate over all the chunks and attach modifiers to their VP-anchor. ...
Returns the nearest chunk in the sentence with the given type. This can be used (for example) to find adverbs and adjectives related to verbs, as in: "the cat is ravenous" => is what? => "ravenous". def nearest(self, type="VP"): """ Returns the nearest chunk in the sentence with the giv...
Returns the next chunk in the sentence with the given type. def next(self, type=None): """ Returns the next chunk in the sentence with the given type. """ i = self.stop s = self.sentence while i < len(s): if s[i].chunk is not None and type in (s[i].chunk.type, None):...
Returns the previous chunk in the sentence with the given type. def previous(self, type=None): """ Returns the previous chunk in the sentence with the given type. """ i = self.start - 1 s = self.sentence while i >= 0: if s[i].chunk is not None and type in (s...
Appends the next word to the sentence / chunk / preposition. For example: Sentence.append("clawed", "claw", "VB", "VP", role=None, relation=1) - word : the current word, - lemma : the canonical form of the word, - type : part-of-speech tag for the word (NN, JJ,...
Returns the arguments for Sentence.append() from a tagged token representation. The order in which token tags appear can be specified. The default order is (separated by slashes): - word, - part-of-speech, - (IOB-)chunk, - (IOB-)preposition, ...
Parses the chunk tag, role and relation id from the token relation tag. - VP => VP, [], [] - VP-1 => VP, [1], [None] - ADJP-PRD => ADJP, [None], [PRD] - NP-SBJ-1 => NP, [1], [SBJ] - NP-OBJ-1*NP-OBJ-2 => NP, [1,2], ...
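The tag grammar shown above (chunk type, optional role, optional numeric relation id, with `*`-separated alternatives) can be parsed with a short helper. This is an illustrative reimplementation; the function name is not the library's:

```python
def parse_relation_tag(tag):
    """Parse a token relation tag into (chunk_type, relation_ids, roles).
    Examples from the grammar above:
      "VP"                -> ("VP", [], [])
      "VP-1"              -> ("VP", [1], [None])
      "ADJP-PRD"          -> ("ADJP", [None], ["PRD"])
      "NP-SBJ-1*NP-OBJ-2" -> ("NP", [1, 2], ["SBJ", "OBJ"])
    """
    chunk, ids, roles = None, [], []
    for part in tag.split("*"):          # "*" separates alternative relations
        fields = part.split("-")
        chunk = fields[0]                # first field is always the chunk type
        id_, role = None, None
        for f in fields[1:]:             # remaining fields: numeric id or role
            if f.isdigit():
                id_ = int(f)
            else:
                role = f
        if len(fields) > 1:              # bare "VP" contributes no id/role
            ids.append(id_)
            roles.append(role)
    return chunk, ids, roles
```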
Adds a new Word to the sentence. Other Sentence._do_[tag] functions assume a new word has just been appended. def _do_word(self, word, lemma=None, type=None): """ Adds a new Word to the sentence. Other Sentence._do_[tag] functions assume a new word has just been appended. """ ...
Adds a new Chunk to the sentence, or adds the last word to the previous chunk. The word is attached to the previous chunk if both type and relation match, and if the word's chunk tag does not start with "B-" (i.e., iob != BEGIN). Punctuation marks (or other "O" chunk tags) are not ch...
Attaches subjects, objects and verbs. If the previous chunk is a subject/object/verb, it is stored in Sentence.relations{}. def _do_relation(self): """ Attaches subjects, objects and verbs. If the previous chunk is a subject/object/verb, it is stored in Sentence.relations{}. """...
Attaches prepositional noun phrases. Identifies PNP's from either the PNP tag or the P-attachment tag. This does not determine the PP-anchor, it only groups words in a PNP chunk. def _do_pnp(self, pnp, anchor=None): """ Attaches prepositional noun phrases. Identifies PNP's f...
Collects preposition anchors and attachments in a dictionary. Once the dictionary has an entry for both the anchor and the attachment, they are linked. def _do_anchor(self, anchor): """ Collects preposition anchors and attachments in a dictionary. Once the dictionary has an entry for bo...
Attaches conjunctions. CC-words like "and" and "or" between two chunks indicate a conjunction. def _do_conjunction(self, _and=("and", "e", "en", "et", "und", "y")): """ Attaches conjunctions. CC-words like "and" and "or" between two chunks indicate a conjunction. """ w = sel...
Returns a tag for the word at the given index. The tag can be WORD, LEMMA, POS, CHUNK, PNP, RELATION, ROLE, ANCHOR or a custom word tag. def get(self, index, tag=LEMMA): """ Returns a tag for the word at the given index. The tag can be WORD, LEMMA, POS, CHUNK, PNP, RELATION, ROLE, ANCHO...
Iterates over the tags in the entire Sentence. For example, Sentence.loop(POS, LEMMA) yields tuples of the part-of-speech tags and lemmata. Possible tags: WORD, LEMMA, POS, CHUNK, PNP, RELATION, ROLE, ANCHOR or a custom word tag. Any order or combination of tags can be supplied. de...
Returns the indices of tokens in the sentence where the given token tag equals the string. The string can contain a wildcard "*" at the end (this way "NN*" will match "NN" and "NNS"). The tag can be WORD, LEMMA, POS, CHUNK, PNP, RELATION, ROLE, ANCHOR or a custom word tag. For exampl...
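The trailing-`*` wildcard match described here reduces to a prefix test; a minimal sketch (hypothetical helper name):

```python
def match_tag(tag, pattern):
    """Match a token tag against a pattern with an optional trailing "*"
    wildcard, so "NN*" matches "NN", "NNS", "NNP", ..."""
    if pattern.endswith("*"):
        return tag.startswith(pattern[:-1])
    return tag == pattern
```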
Returns a portion of the sentence from word start index to word stop index. The returned slice is a subclass of Sentence and a deep copy. def slice(self, start, stop): """ Returns a portion of the sentence from word start index to word stop index. The returned slice is a subclass of Sen...
Returns an in-order list of mixed Chunk and Word objects. With pnp=True, also contains PNPChunk objects whenever possible. def constituents(self, pnp=False): """ Returns an in-order list of mixed Chunk and Word objects. With pnp=True, also contains PNPChunk objects whenever possible. ...
Returns a new Sentence from the given XML string. def from_xml(cls, xml): """ Returns a new Sentence from the given XML string. """ s = parse_string(xml) return Sentence(s.split("\n")[0], token=s.tags, language=s.language)
Yields the sentence as an XML-formatted string (plain bytestring, UTF-8 encoded). All the sentences in the XML are wrapped in a <text> element. def xml(self): """ Yields the sentence as an XML-formatted string (plain bytestring, UTF-8 encoded). All the sentences in the XML are wrapped i...
Extract credits from `AUTHORS.rst` def get_credits(): """Extract credits from `AUTHORS.rst`""" credits = read(os.path.join(_HERE, "AUTHORS.rst")).split("\n") from_index = credits.index("Active Contributors") credits = "\n".join(credits[from_index + 2:]) return credits
Converts ``rst`` to **markdown_github**, using :program:`pandoc` **Input** * ``FILE.rst`` **Output** * ``FILE.md`` def rst2markdown_github(path_to_rst, path_to_md, pandoc="pandoc"): """ Converts ``rst`` to **markdown_github**, using :program:`pandoc` **Input** * ``FILE...
Extract HELP information from ``<program> -h | --help`` message **Input** * ``$ <program> -h | --help`` * ``$ cd <cwd> && make help`` **Output** * ``docs/src/console_help_xy.rst`` def console_help2rst(cwd, help_cmd, path_to_rst, rst_title, format_as_code=False):...
Update documentation (ready for publishing new release) Usually called by ``make docs`` :param bool make_doc: generate DOC page from Makefile help messages def update_docs(readme=True, makefiles=True): """Update documentation (ready for publishing new release) Usually called by ``make docs`` :p...
Returns root mean square value of f(x, params) def rms(self, x, params=()): """ Returns root mean square value of f(x, params) """ internal_x, internal_params = self.pre_process(np.asarray(x), np.asarray(params)) if internal_params.ndim > 1...
Solve the system for a set of parameters in which one is varied Parameters ---------- x0 : array_like Guess (subject to ``self.post_processors``) params : array_like Parameter values varied_data : array_like Numerical values of the varied paramete...
Plots the results from :meth:`solve_series`. Parameters ---------- xres : array Of shape ``(varied_data.size, self.nx)``. varied_data : array See :meth:`solve_series`. varied_idx : int or str See :meth:`solve_series`. \\*\\*kwargs : ...
Analogous to :meth:`plot_series` but will plot residuals. def plot_series_residuals(self, xres, varied_data, varied_idx, params, **kwargs): """ Analogous to :meth:`plot_series` but will plot residuals. """ nf = len(self.f_cb(*self.pre_process(xres[0], params))) xerr = np.empty((xres.shape[0], n...
Analogous to :meth:`plot_series` but for internal residuals from last run. def plot_series_residuals_internal(self, varied_data, varied_idx, **kwargs): """ Analogous to :meth:`plot_series` but for internal residuals from last run. """ nf = len(self.f_cb(*self.pre_process( self.internal_xout...
Solve and plot for a series of a varied parameter. Convenience method, see :meth:`solve_series`, :meth:`plot_series` & :meth:`plot_series_residuals_internal` for more information. def solve_and_plot_series(self, x0, params, varied_data, varied_idx, solver=None, plot_kwargs=None, ...
Used internally for transformation of variables. def pre_process(self, x0, params=()): """ Used internally for transformation of variables. """ # Should be used by all methods matching "solve_*" if self.x_by_name and isinstance(x0, dict): x0 = [x0[k] for k in self.names] if ...
Used internally for transformation of variables. def post_process(self, xout, params_out): """ Used internally for transformation of variables. """ # Should be used by all methods matching "solve_*" for post_processor in self.post_processors: xout, params_out = post_processor(xout, ...
Solve with user specified ``solver`` choice. Parameters ---------- x0: 1D array of floats Guess (subject to ``self.post_processors``) params: 1D array_like of floats Parameters (subject to ``self.post_processors``) internal_x0: 1D array of floats ...
Uses ``scipy.optimize.root`` See: http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html Parameters ---------- intern_x0: array_like initial guess tol: float Tolerance method: str What method to use. Defaults to ...
Solve the problem (systems of equations) Parameters ---------- x0 : array Guess. params : array See :meth:`NeqSys.solve`. internal_x0 : array See :meth:`NeqSys.solve`. solver : str or callable or iterable of such. See :meth...
Constructs a pyneqsys.symbolic.SymbolicSys instance and returns the result of its ``solve`` method. def solve(guess_a, guess_b, power, solver='scipy'): """ Constructs a pyneqsys.symbolic.SymbolicSys instance and returns the result of its ``solve`` method. """ # The problem is 2-dimensional so we need 2 symbols x = sp.symbol...
Example demonstrating how to solve a system of non-linear equations defined as SymPy expressions. The example shows how a non-linear problem can be given a command-line interface which may be preferred by end-users who are not familiar with Python. def main(guess_a=1., guess_b=0., power=3, savetxt='None', ver...
Plot the values of the solution vector vs the varied parameter. Parameters ---------- xres : array Solution vector of shape ``(varied_data.size, x0.size)``. varied_data : array Numerical values of the varied parameter. indices : iterable of integers, optional Indices of vari...
Places a legend box outside a matplotlib Axes instance. def mpl_outside_legend(ax, **kwargs): """ Places a legend box outside a matplotlib Axes instance. """ box = ax.get_position() ax.set_position([box.x0, box.y0, box.width * 0.75, box.height]) # Put a legend to the right of the current axis ax.le...
Transform a linear system to reduced row-echelon form Transforms both the matrix and right-hand side of a linear system of equations to reduced row echelon form Parameters ---------- A : Matrix-like Iterable of rows. b : iterable Returns ------- A', b' - transformed versio...
Returns Ax - b Parameters ---------- A : matrix_like of numbers Of shape (len(b), len(x)). x : iterable of symbols b : array_like of numbers (default: None) When ``None``, assume zeros of length ``len(x)``. Matrix : class When ``rref == True``: A matrix class which suppo...
Generate a SymbolicSys instance from a callback. Parameters ---------- cb : callable Should have the signature ``cb(x, p, backend) -> list of exprs``. nx : int Number of unknowns, when not given it is deduced from ``kwargs['names']``. nparams : int ...
Return the Jacobian of the expressions. def get_jac(self): """ Return the Jacobian of the expressions. """ if self._jac is True: if self.band is None: f = self.be.Matrix(self.nf, 1, self.exprs) _x = self.be.Matrix(self.nx, 1, self.x) return f.ja...
Generate a TransformedSys instance from a callback Parameters ---------- cb : callable Should have the signature ``cb(x, p, backend) -> list of exprs``. The callback ``cb`` should return *untransformed* expressions. transf_cbs : pair or iterable of pairs of calla...
Write binary header data to a file handle. This method writes exactly 512 bytes to the beginning of the given file handle. Parameters ---------- handle : file handle The given handle will be reset to 0 using `seek` and then 512 bytes will be written to d...
Read and parse binary header data from a file handle. This method reads exactly 512 bytes from the beginning of the given file handle. Parameters ---------- handle : file handle The given handle will be reset to 0 using `seek` and then 512 bytes will be ...
Return the number of bytes needed to store this parameter. def binary_size(self): '''Return the number of bytes needed to store this parameter.''' return ( 1 + # group_id 2 + # next offset marker 1 + len(self.name.encode('utf-8')) + # size of name and name bytes ...
Write binary data for this parameter to a file handle. Parameters ---------- group_id : int The numerical ID of the group that holds this parameter. handle : file handle An open, writable, binary file handle. def write(self, group_id, handle): '''Write b...
Read binary data for this parameter from a file handle. This reads exactly enough data from the current position in the file to initialize the parameter. def read(self, handle): '''Read binary data for this parameter from a file handle. This reads exactly enough data from the current ...
Unpack the raw bytes of this param using the given data format. def _as_array(self, fmt): '''Unpack the raw bytes of this param using the given data format.''' assert self.dimensions, \ '{}: cannot get value as {} array!'.format(self.name, fmt) elems = array.array(fmt) elems...
Get the param as an array of raw byte strings. def bytes_array(self): '''Get the param as an array of raw byte strings.''' assert len(self.dimensions) == 2, \ '{}: cannot get value as bytes array!'.format(self.name) l, n = self.dimensions return [self.bytes[i*l:(i+1)*l] for ...
Get the param as an array of unicode strings. def string_array(self): '''Get the param as an array of unicode strings.''' assert len(self.dimensions) == 2, \ '{}: cannot get value as string array!'.format(self.name) l, n = self.dimensions return [self.bytes[i*l:(i+1)*l].decode...
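The fixed-width slicing used by `bytes_array` and `string_array` amounts to cutting a flat byte buffer by the two dimension sizes. A standalone sketch with hypothetical names (the real methods read the width and count from `self.dimensions`):

```python
def fixed_width_strings(raw, width, count, encoding="utf-8"):
    """Split a flat byte buffer into `count` strings of `width` bytes each,
    decoding each slice and stripping trailing padding."""
    return [raw[i * width:(i + 1) * width].decode(encoding).rstrip()
            for i in range(count)]
```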
Add a parameter to this group. Parameters ---------- name : str Name of the parameter to add to this group. The name will automatically be case-normalized. Additional keyword arguments will be passed to the `Param` constructor. def add_param(self, name, **kwarg...
Return the number of bytes to store this group and its parameters. def binary_size(self): '''Return the number of bytes to store this group and its parameters.''' return ( 1 + # group_id 1 + len(self.name.encode('utf-8')) + # size of name and name bytes 2 + # next of...
Write this parameter group, with parameters, to a file handle. Parameters ---------- group_id : int The numerical ID of the group. handle : file handle An open, writable, binary file handle. def write(self, group_id, handle): '''Write this parameter grou...
Ensure that the metadata in our file is self-consistent. def check_metadata(self): '''Ensure that the metadata in our file is self-consistent.''' assert self.header.point_count == self.point_used, ( 'inconsistent point count! {} header != {} POINT:USED'.format( self.header.p...
Add a new parameter group. Parameters ---------- group_id : int The numeric ID for a group to check or create. name : str, optional If a group is created, assign this name to the group. desc : str, optional If a group is created, assign this d...
Get a group or parameter. Parameters ---------- group : str If this string contains a period (.), then the part before the period will be used to retrieve a group, and the part after the period will be used to retrieve a parameter from that group. If this ...
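The dotted `GROUP.PARAM` lookup described here splits the key on the first period: the left part names a group, the right part (if any) names a parameter within it. An illustrative sketch over a plain dict-of-dicts layout (the real object stores `Group` and `Param` instances, and names are case-normalized):

```python
def get(groups, key, default=None):
    """Resolve 'GROUP' or 'GROUP.PARAM' against a dict mapping group names
    to dicts of their parameters (illustrative layout, not the real classes)."""
    group, _, param = key.partition(".")
    g = groups.get(group.upper())        # names are case-normalized
    if g is None:
        return default
    return g.get(param.upper(), default) if param else g
```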