text (string) |
|---|
Returns the plural of a given word.
The inflection is based on probability rather than gender and role.
def pluralize(word, pos=NOUN, gender=MALE, role=SUBJECT, custom={}):
""" Returns the plural of a given word.
The inflection is based on probability rather than gender and role.
"""
w = wo... |
Returns the singular of a given word.
The inflection is based on probability rather than gender and role.
def singularize(word, pos=NOUN, gender=MALE, role=SUBJECT, custom={}):
""" Returns the singular of a given word.
The inflection is based on probability rather than gender and role.
"""
... |
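Because both inflections are rule-based and probabilistic, the quickest way to see their behavior is a round trip. A minimal sketch, assuming the usual ``pattern.de`` import path; the printed results are typical outputs of the rules, not guarantees:

```python
# Round-trip sketch for the two inflection helpers (pattern.de assumed).
from pattern.de import pluralize, singularize

print(pluralize("Katze"))     # typically "Katzen"
print(singularize("Katzen"))  # typically "Katze"
```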
For a predicative adjective, returns the attributive form (lowercase).
In German, the attributive is formed with -e, -em, -en, -er or -es,
depending on gender (masculine, feminine, neuter or plural) and role
(nominative, accusative, dative, genitive).
def attributive(adjective, gender=MALE, rol... |
Returns the predicative adjective (lowercase).
In German, the attributive form preceding a noun is always used:
"ein kleiner Junge" => strong, masculine, nominative,
"eine schöne Frau" => mixed, feminine, nominative,
"der kleine Prinz" => weak, masculine, nominative, etc.
The pre... |
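The two functions are inverses over the declension described above. A small sketch, assuming ``pattern.de``'s ``MALE``/``FEMALE`` constants; the expected outputs follow the strong-declension endings listed in the docstring:

```python
# attributive() adds a declension ending; predicative() strips it.
from pattern.de import attributive, predicative, MALE, FEMALE

print(attributive("klein", gender=MALE))    # expected "kleiner" (strong, nominative)
print(attributive("klein", gender=FEMALE))  # expected "kleine"
print(predicative("kleiner"))               # expected "klein"
```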
Returns the comparative or superlative form of the given (inflected) adjective.
def grade(adjective, suffix=COMPARATIVE):
""" Returns the comparative or superlative form of the given (inflected) adjective.
"""
b = predicative(adjective)
# groß => großt, schön => schönst
if suffix == SUPERLATIVE and... |
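In practice ``grade()`` is usually reached through the ``comparative()`` and ``superlative()`` wrappers; the sketch below assumes ``pattern.de`` exposes them, as the English module does. Note the comment in the source: the rule simply appends a suffix, so irregular forms (groß => größt) may not be produced.

```python
from pattern.de import comparative, superlative

print(comparative("schön"))  # expected "schöner"
print(superlative("schön"))  # expected "schönst" (suffix rule only)
```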
Returns the base form of the given inflected verb, using a rule-based approach.
def find_lemma(self, verb):
""" Returns the base form of the given inflected verb, using a rule-based approach.
"""
v = verb.lower()
# Common prefixes: be-finden and emp-finden probably inflect like finden.
... |
For a regular verb (base form), returns the forms using a rule-based approach.
def find_lexeme(self, verb):
""" For a regular verb (base form), returns the forms using a rule-based approach.
"""
v = verb.lower()
# Stem = infinitive minus -en, -ln, -rn.
b = b0 = re.sub("en$", "",... |
Returns a list of possible tenses for the given inflected verb.
def tenses(self, verb, parse=True):
""" Returns a list of possible tenses for the given inflected verb.
"""
tenses = _Verbs.tenses(self, verb, parse)
if len(tenses) == 0:
# auswirkte => wirkte aus
fo... |
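A sketch of querying an inflected form, assuming ``pattern.de``'s module-level ``tenses()`` and tense constants; the exact tuples depend on the verb lexicon, so the output shown is only illustrative.

```python
from pattern.de import tenses, PAST

print(tenses("ging"))
# e.g. [('past', 1, 'singular', 'indicative', 'imperfective'), ...]
print(PAST in tenses("ging"))  # tense constants can be tested for membership
```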
Return a set of all words in a dataset.
:param dataset: A list of tuples of the form ``(words, label)`` where
``words`` is either a string or a list of tokens.
def _get_words_from_dataset(dataset):
"""Return a set of all words in a dataset.
:param dataset: A list of tuples of the form ``(words, l... |
A basic document feature extractor that returns a dict indicating what
words in ``train_set`` are contained in ``document``.
:param document: The text to extract features from. Can be a string or an iterable.
:param list train_set: Training data set, a list of tuples of the form
``(words, label)``.... |
A basic document feature extractor that returns a dict of words that the
document contains.
def contains_extractor(document):
"""A basic document feature extractor that returns a dict of words that the
document contains."""
tokens = _get_document_tokens(document)
features = dict((u'contains({0})'.f... |
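The feature dict is keyed by ``'contains(word)'`` strings, as the ``u'contains({0})'`` format in the snippet suggests. A usage sketch, assuming the extractor is importable from ``textblob.classifiers`` (the module this code appears to mirror):

```python
from textblob.classifiers import contains_extractor

print(contains_extractor("the cat sat"))
# {'contains(the)': True, 'contains(cat)': True, 'contains(sat)': True}
```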
Reads a data file and returns an iterable that can be used as
testing or training data.
def _read_data(self, dataset, format=None):
"""Reads a data file and returns an iterable that can be used as
testing or training data."""
# Attempt to detect file format if "format" isn't specified... |
Extracts features from a body of text.
:rtype: dictionary of features
def extract_features(self, text):
"""Extracts features from a body of text.
:rtype: dictionary of features
"""
# Feature extractor may take one or two arguments
try:
return self.feature_... |
Train the classifier with a labeled feature set and return the
classifier. Takes the same arguments as the wrapped NLTK class. This
method is implicitly called when calling ``classify`` or ``accuracy``
methods and is included only to allow passing in arguments to the
``train`` method of ... |
Classifies the text.
:param str text: A string of text.
def classify(self, text):
"""Classifies the text.
:param str text: A string of text.
"""
text_features = self.extract_features(text)
return self.classifier.classify(text_features) |
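End-to-end, ``classify()`` sits on top of ``extract_features()`` and the wrapped NLTK classifier. A minimal training/classification sketch with textblob's ``NaiveBayesClassifier``, assuming ``(text, label)`` training tuples:

```python
from textblob.classifiers import NaiveBayesClassifier

train = [("I love this library", "pos"),
         ("This is terrible", "neg")]
cl = NaiveBayesClassifier(train)  # train() is called implicitly
print(cl.classify("I love it"))   # expected "pos"
```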
Compute the accuracy on a test set.
:param test_set: A list of tuples of the form ``(text, label)``, or a
filename.
:param format: If ``test_set`` is a filename, the file format, e.g.
``"csv"`` or ``"json"``. If ``None``, will attempt to detect the
file format.
def ... |
Update the classifier with new training data and re-train the
classifier.
:param new_data: New data as a list of tuples of the form
``(text, label)``.
def update(self, new_data, *args, **kwargs):
'''Update the classifier with new training data and re-train the
classifier.... |
Return the label probability distribution for classifying a string
of text.
Example:
::
>>> classifier = NaiveBayesClassifier(train_data)
>>> prob_dist = classifier.prob_classify("I feel happy this morning.")
>>> prob_dist.max()
'positive'
... |
Train the classifier with labeled and unlabeled feature sets and
return the classifier. Takes the same arguments as the wrapped NLTK
class. This method is implicitly called when calling ``classify`` or
``accuracy`` methods and is included only to allow passing in arguments
to the ``tra... |
Update the classifier with new data and re-train the
classifier.
:param new_positive_data: List of new, labeled strings.
:param new_unlabeled_data: List of new, unlabeled strings.
def update(self, new_positive_data=None,
new_unlabeled_data=None, positive_prob_prior=0.5,
... |
Return the label probability distribution for classifying a string
of text.
Example:
::
>>> classifier = MaxEntClassifier(train_data)
>>> prob_dist = classifier.prob_classify("I feel happy this morning.")
>>> prob_dist.max()
'positive'
... |
Return a list of (lemma, tag) tuples.
:param str text: A string.
def lemmatize(self, text):
"""Return a list of (lemma, tag) tuples.
:param str text: A string.
"""
#: Do not process empty strings (Issue #3)
if text.strip() == "":
return []
parsed_s... |
Parse text (string) and return list of parsed sentences (strings).
Each sentence consists of space separated token elements and the
token format returned by the PatternParser is WORD/TAG/PHRASE/ROLE/LEMMA
(separated by a forward slash '/')
:param str text: A string.
def _parse_text(se... |
Returns True if the pattern matches the given word string.
The pattern can include a wildcard (*front, back*, *both*, in*side),
or it can be a compiled regular expression.
def _match(string, pattern):
""" Returns True if the pattern matches the given word string.
The pattern can include a w... |
Returns a list copy in which each item occurs only once (in-order).
def unique(iterable):
""" Returns a list copy in which each item occurs only once (in-order).
"""
seen = set()
return [x for x in iterable if x not in seen and not seen.add(x)] |
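The filter works because ``set.add()`` returns ``None``: ``not seen.add(x)`` is always ``True`` and records ``x`` as a side effect, so each item passes the comprehension at most once, in order. A self-contained demonstration:

```python
seen = set()
items = [3, 1, 3, 2, 1]
# "x not in seen" rejects repeats; "not seen.add(x)" marks x as seen.
print([x for x in items if x not in seen and not seen.add(x)])  # [3, 1, 2]
```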
Yields all permutations with replacement:
list(product("cat", repeat=2)) =>
[("c", "c"),
("c", "a"),
("c", "t"),
("a", "c"),
("a", "a"),
("a", "t"),
("t", "c"),
("t", "a"),
("t", "t")]
def product(*args, **kwargs):
"""... |
Returns all possible variations of a sequence with optional items.
def variations(iterable, optional=lambda x: False):
""" Returns all possible variations of a sequence with optional items.
"""
# For example: variations(["A?", "B?", "C"], optional=lambda s: s.endswith("?"))
# defines a sequence where c... |
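A behavior sketch (not the library's exact code, and ``variations_sketch`` is a hypothetical name): every subset of the optional items may be dropped, and longer variations come first. The example reproduces the ``["A?", "B?", "C"]`` case from the comment above.

```python
from itertools import product

def variations_sketch(iterable, optional=lambda x: False):
    # Hypothetical re-implementation for illustration only.
    items = tuple(iterable)
    flags = [optional(x) for x in items]
    out = set()
    for drops in product([False, True], repeat=sum(flags)):
        it = iter(drops)
        kept = tuple(x for x, opt in zip(items, flags) if not (opt and next(it)))
        out.add(kept)
    return sorted(out, key=lambda v: (-len(v), v))  # longest first

print(variations_sketch(["A?", "B?", "C"], optional=lambda s: s.endswith("?")))
# [('A?', 'B?', 'C'), ('A?', 'C'), ('B?', 'C'), ('C',)]
```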
Returns a Pattern from the given string or regular expression.
Recently compiled patterns are kept in cache
(if they do not use taxonomies, which are mutable dicts).
def compile(pattern, *args, **kwargs):
""" Returns a Pattern from the given string or regular expression.
Recently compiled p... |
Returns True if pattern.search(Sentence(string)) may yield matches.
It is often faster to scan prior to creating a Sentence and searching it.
def scan(pattern, string, *args, **kwargs):
""" Returns True if pattern.search(Sentence(string)) may yield matches.
It is often faster to scan prior to creat... |
Returns the first match found in the given sentence, or None.
def match(pattern, sentence, *args, **kwargs):
""" Returns the first match found in the given sentence, or None.
"""
return compile(pattern, *args, **kwargs).match(sentence) |
Returns a list of all matches found in the given sentence.
def search(pattern, sentence, *args, **kwargs):
""" Returns a list of all matches found in the given sentence.
"""
return compile(pattern, *args, **kwargs).search(sentence) |
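Taken together, ``compile()``, ``scan()``, ``match()`` and ``search()`` form the module-level API. A usage sketch, assuming ``pattern.search`` combined with ``pattern.de.parsetree()``; the tags matched depend on the parser's output.

```python
from pattern.search import search, match
from pattern.de import parsetree

t = parsetree("Die schwarze Katze schläft.")
for m in search("JJ NN", t):  # adjective followed by a noun
    print(m)                  # expected to cover "schwarze Katze"
print(match("JJ NN", t))      # first match only, or None
```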
Adds a new item from the given (key, value)-tuple.
If the key exists, pushes the updated item to the head of the dict.
def push(self, kv):
""" Adds a new item from the given (key, value)-tuple.
If the key exists, pushes the updated item to the head of the dict.
"""
if kv... |
Appends the given term to the taxonomy and tags it as the given type.
Optionally, a disambiguation value can be supplied.
For example: taxonomy.append("many", "quantity", "50-200")
def append(self, term, type=None, value=None):
""" Appends the given term to the taxonomy and tags it as t... |
Returns the (most recently added) semantic type for the given term ("many" => "quantity").
If the term is not in the dictionary, try Taxonomy.classifiers.
def classify(self, term, **kwargs):
""" Returns the (most recently added) semantic type for the given term ("many" => "quantity").
I... |
Returns a list of all semantic types for the given term.
If recursive=True, traverses parents up to the root.
def parents(self, term, recursive=False, **kwargs):
""" Returns a list of all semantic types for the given term.
If recursive=True, traverses parents up to the root.
"""... |
Returns the value of the given term ("many" => "50-200")
def value(self, term, **kwargs):
""" Returns the value of the given term ("many" => "50-200")
"""
term = self._normalize(term)
if term in self._values:
return self._values[term]
for classifier in self.classifie... |
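Together, ``append()``, ``classify()``, ``parents()`` and ``value()`` give the round trip hinted at by the "many" => "quantity" => "50-200" examples above. A sketch, assuming the global ``taxonomy`` object from ``pattern.search``:

```python
from pattern.search import taxonomy

taxonomy.append("many", type="quantity", value="50-200")
print(taxonomy.classify("many"))  # "quantity"
print(taxonomy.value("many"))     # "50-200"
print(taxonomy.parents("many"))   # ["quantity"]
```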
Returns a new Constraint from the given string.
Uppercase words indicate either a tag ("NN", "JJ", "VP")
or a taxonomy term (e.g., "PRODUCT", "PERSON").
Syntax:
( defines an optional constraint, e.g., "(JJ)".
[ defines a constraint with spaces, e.g., "[Mac OS ... |
Return True if the given Word is part of the constraint:
- the word (or lemma) occurs in Constraint.words, OR
- the word (or lemma) occurs in Constraint.taxa taxonomy tree, AND
- the word and/or chunk tags match those defined in the constraint.
Individual terms in Constra... |
Returns a new Pattern from the given string.
Constraints are separated by a space.
If a constraint contains a space, it must be wrapped in [].
def fromstring(cls, s, *args, **kwargs):
""" Returns a new Pattern from the given string.
Constraints are separated by a space.
... |
Returns True if search(Sentence(string)) may yield matches.
It is often faster to scan prior to creating a Sentence and searching it.
def scan(self, string):
""" Returns True if search(Sentence(string)) may yield matches.
It is often faster to scan prior to creating a Sentence and searc... |
Returns a list of all matches found in the given sentence.
def search(self, sentence):
""" Returns a list of all matches found in the given sentence.
"""
if sentence.__class__.__name__ == "Sentence":
pass
elif isinstance(sentence, list) or sentence.__class__.__name__ == "Tex... |
Returns the first match found in the given sentence, or None.
def match(self, sentence, start=0, _v=None, _u=None):
""" Returns the first match found in the given sentence, or None.
"""
if sentence.__class__.__name__ == "Sentence":
pass
elif isinstance(sentence, list) or sen... |
Returns the constraint that matches the given Word, or None.
def constraint(self, word):
""" Returns the constraint that matches the given Word, or None.
"""
if word.index in self._map1:
return self._map1[word.index] |
Returns a list of constraints that match the given Chunk.
def constraints(self, chunk):
""" Returns a list of constraints that match the given Chunk.
"""
a = [self._map1[w.index] for w in chunk.words if w.index in self._map1]
b = []; [b.append(constraint) for constraint in a if constrai... |
Returns a list of Word and Chunk objects,
where words have been grouped into their chunks whenever possible.
Optionally, returns only chunks/words that match given constraint(s), or constraint index.
def constituents(self, constraint=None):
""" Returns a list of Word and Chunk objects,... |
Returns a list of Word objects that match the given group.
With chunked=True, returns a list of Word + Chunk objects - see Match.constituents().
A group consists of consecutive constraints wrapped in { }, e.g.,
search("{JJ JJ} NN", Sentence(parse("big black cat"))).group(1) => big bl... |
Return the sentiment as a tuple of the form:
``(polarity, subjectivity)``
:param str text: A string.
.. todo::
Figure out best format to be passed to the analyzer.
There might be a better format than a string of space separated
lemmas (e.g. with pos tags) b... |
Converts an STTS tag to a universal tag.
For example: ohne/APPR => ohne/PREP
def stts2universal(token, tag):
""" Converts an STTS tag to a universal tag.
For example: ohne/APPR => ohne/PREP
"""
if tag in ("KON", "KOUI", "KOUS", "KOKOM"):
return (token, CONJ)
if tag in ("PTKZU", ... |
Annotates the tokens with lemmata for plural nouns and conjugated verbs,
where each token is a [word, part-of-speech] list.
def find_lemmata(tokens):
""" Annotates the tokens with lemmata for plural nouns and conjugated verbs,
where each token is a [word, part-of-speech] list.
"""
for token... |
Returns a parsed Text from the given parsed string.
def tree(s, token=[WORD, POS, CHUNK, PNP, REL, LEMMA]):
""" Returns a parsed Text from the given parsed string.
"""
return Text(s, token) |
Returns a list of (token, tag)-tuples from the given string.
def tag(s, tokenize=True, encoding="utf-8", **kwargs):
""" Returns a list of (token, tag)-tuples from the given string.
"""
tags = []
for sentence in parse(s, tokenize, True, False, False, False, encoding, **kwargs).split():
for token... |
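A usage sketch for ``tag()``, assuming ``pattern.de``; the exact tags depend on the lexicon, so the comment shows typical Penn-style output.

```python
from pattern.de import tag

for word, pos in tag("Die Katze schläft."):
    print(word, pos)  # e.g. Die DT / Katze NN / schläft VB / . .
```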
Returns a sorted list of keywords in the given string.
def keywords(s, top=10, **kwargs):
""" Returns a sorted list of keywords in the given string.
"""
return parser.find_keywords(s, top=top, frequency=parser.frequency) |
Convenience function for tokenizing sentences (not iterable).
If tokenizer is not specified, the default tokenizer NLTKPunktTokenizer()
is used (same behaviour as in the main `TextBlob`_ library).
This function returns the sentences as a generator object.
.. _TextBlob: http://textblob.readthedocs.org... |
Convenience function for tokenizing text into words.
NOTE: NLTK's word tokenizer expects sentences as input, so the text will be
tokenized to sentences before being tokenized to words.
This function returns an itertools chain object (generator).
def word_tokenize(text, tokenizer=None, include_punc=True, ... |
Return a list of word tokens.
:param text: string of text.
:param include_punc: (optional) whether to include punctuation as separate
tokens. Default to True.
:param nested: (optional) whether to return tokens as nested lists of
sentences. Default to False.
def tokenize... |
NLTK's sentence tokenizer (currently PunktSentenceTokenizer).
Uses an unsupervised algorithm to build a model for abbreviation
words, collocations, and words that start sentences, then uses
that to find sentence boundaries.
def sent_tokenize(self, text, **kwargs):
"""NLTK's sentence to... |
The Treebank tokenizer uses regular expressions to tokenize text as
in Penn Treebank.
It assumes that the text has already been segmented into sentences,
e.g. using ``self.sent_tokenize()``.
This tokenizer performs the following steps:
- split standard contractions, e.g. ``don... |
Returns a list of sentences.
Each sentence is a space-separated string of tokens (words).
Handles common cases of abbreviations (e.g., etc., ...).
Punctuation marks are split from other words. Periods (or ?!) mark the end of a sentence.
Headings without an ending period are inferred by ... |
Return a list of word tokens.
:param text: string of text.
:param include_punc: (optional) whether to include punctuation as separate
tokens. Default to True.
def tokenize(self, text, include_punc=True, **kwargs):
"""Return a list of word tokens.
:param text: string of tex... |
Parses the text.
``pattern.de.parse(**kwargs)`` keyword arguments can be passed to the parser instance and
are documented in the main docstring of
:class:`PatternParser() <textblob_de.parsers.PatternParser>`.
:param str text: A string.
def parse(self, text):
"""Parses the text.
``patte... |
Return a list of noun phrases (strings) for a body of text.
:param str text: A string.
def extract(self, text):
"""Return a list of noun phrases (strings) for a body of text.
:param str text: A string.
"""
_extracted = []
if text.strip() == "":
return _ext... |
Filter insignificant words for key noun phrase extraction.
determiners, relative pronouns, reflexive pronouns
In general, pronouns are not useful, as you need context to know what they refer to.
Most of the pronouns, however, are filtered out by blob.noun_phrase method's
np length (>1) ... |
Parse text (string) and return list of parsed sentences (strings).
Each sentence consists of space separated token elements and the
token format returned by the PatternParser is WORD/TAG/PHRASE/ROLE/(LEMMA)
(separated by a forward slash '/')
:param str text: A string.
def _parse_text(... |
Tag a string `sentence`.
:param str or list sentence: A string or a list of sentence strings.
:param tokenize: (optional) If ``False``, the sentence has to be tokenized
beforehand (space-separated string).
def tag(self, sentence, tokenize=True):
"""Tag a string `sentence`.
:param str... |
Returns the given value as a Unicode string (if possible).
def decode_string(v, encoding="utf-8"):
"""Returns the given value as a Unicode string (if possible)."""
if isinstance(encoding, basestring):
encoding = ((encoding,),) + (("windows-1252",), ("utf-8", "ignore"))
if isinstance(v, binary_type)... |
Returns the given value as a Python byte string (if possible).
def encode_string(v, encoding="utf-8"):
"""Returns the given value as a Python byte string (if possible)."""
if isinstance(encoding, basestring):
encoding = ((encoding,),) + (("windows-1252",), ("utf-8", "ignore"))
if isinstance(v, unic... |
Given a command, mode, and a PATH string, return the path which conforms
to the given mode on the PATH, or None if there is no such file.
`mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
of os.environ.get("PATH"), or can be overridden with a custom search
path.
def _shutil_which(cmd... |
Translate the word to another language using Google's Translate API.
.. versionadded:: 0.5.0 (``textblob``)
def translate(self, from_lang=None, to="de"):
"""Translate the word to another language using Google's Translate API.
.. versionadded:: 0.5.0 (``textblob``)
"""
if from... |
Return the lemma of each word in this WordList.
Currently using NLTKPunktTokenizer() for all lemmatization
tasks. This might cause slightly different tokenization results
compared to the TextBlob.words property.
def lemmatize(self):
"""Return the lemma of each word in this WordList.
... |
Return a list of tokens, using ``tokenizer``.
:param tokenizer: (optional) A tokenizer object. If None, defaults to
this blob's default tokenizer.
def tokenize(self, tokenizer=None):
"""Return a list of tokens, using ``tokenizer``.
:param tokenizer: (optional) A tokenizer object. ... |
Returns a list of noun phrases for this blob.
def noun_phrases(self):
"""Returns a list of noun phrases for this blob."""
return WordList([phrase.strip()
for phrase in self.np_extractor.extract(self.raw)
if len(phrase.split()) > 1]) |
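Only phrases longer than one word survive the ``len(phrase.split()) > 1`` filter. A usage sketch through the blob API, assuming ``textblob_de``'s ``TextBlobDE``; the extracted phrases are illustrative.

```python
from textblob_de import TextBlobDE

blob = TextBlobDE("Der kleine Prinz besucht einen großen Planeten.")
print(blob.noun_phrases)  # e.g. ['kleine Prinz', 'großen Planeten']
```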
Returns a list of tuples of the form (word, POS tag).
Example:
::
[('At', 'IN'), ('eight', 'CD'), ("o'clock", 'JJ'), ('on', 'IN'),
('Thursday', 'NNP'), ('morning', 'NN')]
:rtype: list of tuples
def pos_tags(self):
"""Returns an list of tuples of the f... |
Dictionary of word frequencies in this text.
def word_counts(self):
"""Dictionary of word frequencies in this text."""
counts = defaultdict(int)
stripped_words = [lowerstrip(word) for word in self.words]
for word in stripped_words:
counts[word] += 1
return counts |
The dict representation of this sentence.
def dict(self):
"""The dict representation of this sentence."""
return {
'raw': self.raw,
'start_index': self.start_index,
'end_index': self.end_index,
'stripped': self.stripped,
'noun_phrases': self.n... |
Return a list of word tokens. This excludes punctuation characters.
If you want to include punctuation characters, access the ``tokens``
property.
:returns: A :class:`WordList <WordList>` of word tokens.
def words(self):
"""Return a list of word tokens. This excludes punctuation charac... |
Return a tuple of the form ``(polarity, subjectivity)`` where polarity
is a float within the range [-1.0, 1.0] and subjectivity is a float
within the range [0.0, 1.0] where 0.0 is very objective and 1.0 is
very subjective.
:rtype: named tuple of the form ``Sentiment(polarity=0.0, subjectivity=... |
Return a json representation (str) of this blob. Takes the same
arguments as json.dumps.
.. versionadded:: 0.5.1 (``textblob``)
def to_json(self, *args, **kwargs):
"""Return a json representation (str) of this blob. Takes the same
arguments as json.dumps.
.. versionadded:: 0.5... |
Returns a list of Sentence objects from the raw text.
def _create_sentence_objects(self):
"""Returns a list of Sentence objects from the raw text."""
sentence_objects = []
sentences = sent_tokenize(self.raw, tokenizer=self.tokenizer)
char_index = 0 # Keeps track of character index with... |
Returns a list of n-grams (tuples of n successive words) from the given string.
Alternatively, you can supply a Text or Sentence object.
With continuous=False, n-grams will not run over sentence markers (i.e., .!?).
Punctuation marks are stripped from words.
def ngrams(string, n=3, punctuation=... |
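A sketch of the expected result shape, assuming ``pattern.de`` re-exports the shared ``ngrams()`` helper; punctuation is stripped and, with ``continuous=False``, n-grams stop at sentence boundaries.

```python
from pattern.de import ngrams

print(ngrams("Die Katze schläft tief.", n=2))
# e.g. [('Die', 'Katze'), ('Katze', 'schläft'), ('schläft', 'tief')]
```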
Returns the string with no more than n repeated characters, e.g.,
deflood("NIIIICE!!", n=1) => "Nice!"
deflood("nice.....", n=3) => "nice..."
def deflood(s, n=3):
""" Returns the string with no more than n repeated characters, e.g.,
deflood("NIIIICE!!", n=1) => "Nice!"
deflood("nice... |
Pretty-prints the output of Parser.parse() as a table with outlined columns.
Alternatively, you can supply a tree.Text or tree.Sentence object.
def pprint(string, token=[WORD, POS, CHUNK, PNP], column=4):
""" Pretty-prints the output of Parser.parse() as a table with outlined columns.
Alternatively... |
Returns an iterator over the lines in the file at the given path,
stripping comments and decoding each line to Unicode.
def _read(path, encoding="utf-8", comment=";;;"):
""" Returns an iterator over the lines in the file at the given path,
stripping comments and decoding each line to Unicode.
... |
Returns a (token, tag)-tuple with a simplified universal part-of-speech tag.
def penntreebank2universal(token, tag):
""" Returns a (token, tag)-tuple with a simplified universal part-of-speech tag.
"""
if tag.startswith(("NNP-", "NNPS-")):
return (token, "%s-%s" % (NOUN, tag.split("-")[-1]))
if... |
Returns a list of sentences. Each sentence is a space-separated string of tokens (words).
Handles common cases of abbreviations (e.g., etc., ...).
Punctuation marks are split from other words. Periods (or ?!) mark the end of a sentence.
Headings without an ending period are inferred by line brea... |
Default morphological tagging rules for English, based on word suffixes.
def _suffix_rules(token, tag="NN"):
""" Default morphological tagging rules for English, based on word suffixes.
"""
if isinstance(token, (list, tuple)):
token, tag = token
if token.endswith("ing"):
tag = "VBG"
... |
Returns a list of [token, tag]-items for the given list of tokens:
["The", "cat", "purs"] => [["The", "DT"], ["cat", "NN"], ["purs", "VB"]]
Words are tagged using the given lexicon of (word, tag)-items.
Unknown words are tagged NN by default.
Unknown words that start with a capital lette... |
The input is a list of [token, tag]-items.
The output is a list of [token, tag, chunk]-items:
The/DT nice/JJ fish/NN is/VBZ dead/JJ ./. =>
The/DT/B-NP nice/JJ/I-NP fish/NN/I-NP is/VBZ/B-VP dead/JJ/B-ADJP ././O
def find_chunks(tagged, language="en"):
""" The input is a list of [token, tag]-i... |
The input is a list of [token, tag, chunk]-items.
The output is a list of [token, tag, chunk, preposition]-items.
PP-chunks followed by NP-chunks make up a PNP-chunk.
def find_prepositions(chunked):
""" The input is a list of [token, tag, chunk]-items.
The output is a list of [token, tag, c... |
The input is a list of [token, tag, chunk]-items.
The output is a list of [token, tag, chunk, relation]-items.
A noun phrase preceding a verb phrase is perceived as sentence subject.
A noun phrase following a verb phrase is perceived as sentence object.
def find_relations(chunked):
""" The ... |
Returns a sorted list of keywords in the given string.
The given parser (e.g., pattern.en.parser) is used to identify noun phrases.
The given frequency dictionary can be a reference corpus,
with relative document frequency (df, 0.0-1.0) for each lemma,
e.g., {"the": 0.8, "cat": 0.1, ...... |
Returns the tense id for a given (tense, person, number, mood, aspect, negated).
Aliases and compound forms (e.g., IMPERFECT) are disambiguated.
def tense_id(*args, **kwargs):
""" Returns the tense id for a given (tense, person, number, mood, aspect, negated).
Aliases and compound forms (e.g., IMPE... |
Returns the value from the function with the given name in the given language module.
By default, language="en".
def _multilingual(function, *args, **kwargs):
""" Returns the value from the function with the given name in the given language module.
By default, language="en".
"""
return geta... |
Returns a (language, confidence)-tuple for the given string.
def language(s):
""" Returns a (language, confidence)-tuple for the given string.
"""
s = decode_utf8(s)
s = set(w.strip(PUNCTUATION) for w in s.replace("'", "' ").split())
n = float(len(s) or 1)
p = {}
for xx in LANGUAGES:
... |
If the list is empty, calls lazylist.load().
Replaces lazylist.method() with list.method() and calls it.
def _lazy(self, method, *args):
""" If the list is empty, calls lazylist.load().
Replaces lazylist.method() with list.method() and calls it.
"""
if list.__len__(self)... |
Trains the model to predict the given tag for the given token,
in context of the given previous and next (token, tag)-tuples.
def train(self, token, tag, previous=None, next=None):
""" Trains the model to predict the given tag for the given token,
in context of the given previous and ne... |
Returns the predicted tag for the given token,
in context of the given previous and next (token, tag)-tuples.
def classify(self, token, previous=None, next=None, **kwargs):
""" Returns the predicted tag for the given token,
in context of the given previous and next (token, tag)-tuples.
... |
Returns a (token, tag)-tuple for the given token,
in context of the given previous and next (token, tag)-tuples.
def apply(self, token, previous=(None, None), next=(None, None)):
""" Returns a (token, tag)-tuple for the given token,
in context of the given previous and next (token, tag)... |
Returns a training vector for the given (word, tag)-tuple and its context.
def _v(self, token, previous=None, next=None):
""" Returns a training vector for the given (word, tag)-tuple and its context.
"""
def f(v, s1, s2):
if s2:
v[s1 + " " + s2] = 1
p, n = ... |
Applies lexical rules to the given token, which is a [word, tag] list.
def apply(self, token, previous=(None, None), next=(None, None)):
""" Applies lexical rules to the given token, which is a [word, tag] list.
"""
w = token[0]
for r in self:
if r[1] in self._cmd: # Rule = ... |
Inserts a new rule that assigns the given tag to words with the given affix,
e.g., Morphology.append("RB", "-ly").
def insert(self, i, tag, affix, cmd="hassuf", tagged=None):
""" Inserts a new rule that assigns the given tag to words with the given affix,
e.g., Morphology.append("RB", "... |