Returns 0 if successful or -1 if the address is out of range
def FSeek(params, ctxt, scope, stream, coord):
"""Returns 0 if successful or -1 if the address is out of range
"""
if len(params) != 1:
raise errors.InvalidArguments(coord, "{} args".format(len(params)), "FSeek accepts only one argument")... |
Returns 0 if successful or -1 if the address is out of range
def FSkip(params, ctxt, scope, stream, coord):
"""Returns 0 if successful or -1 if the address is out of range
"""
if len(params) != 1:
raise errors.InvalidArguments(coord, "{} args".format(len(params)), "FSkip accepts only one argument")... |
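The FSeek/FSkip contract described above (0 on success, -1 when the address is out of range) can be illustrated with a plain Python stream; `fseek_sketch` is a hypothetical stand-in, not the interpreter's implementation:

```python
import io

def fseek_sketch(stream, address):
    """Hypothetical sketch of the FSeek contract: seek to an absolute
    address, returning 0 on success or -1 if it is out of range."""
    end = stream.seek(0, io.SEEK_END)  # stream size
    if address < 0 or address > end:
        return -1
    stream.seek(address)
    return 0
```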
``PackerGZip`` - implements both unpacking and packing. Can be used
as the ``packer`` for a field. When packing, concats the build output
of all params and gzip-compresses the result. When unpacking, concats
the build output of all params and gzip-decompresses the result.
Example:
The code... |
``PackGZip`` - Concats the build output of all params and gzips the
resulting data, returning a char array.
Example: ::
char data[0x100]<pack=PackGZip, ...>;
def pack_gzip(params, ctxt, scope, stream, coord):
"""``PackGZip`` - Concats the build output of all params and gzips the
resulting dat... |
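As a rough illustration of the pack/unpack pair described above, the core transformation is just concatenation plus gzip; this stdlib sketch (function names hypothetical) omits the params/ctxt/stream machinery:

```python
import gzip

def pack_gzip_sketch(chunks):
    """Concatenate built output and gzip-compress it (packing side)."""
    return gzip.compress(b"".join(chunks))

def unpack_gzip_sketch(data):
    """Gzip-decompress previously packed data (unpacking side)."""
    return gzip.decompress(data)
```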
WatchLength - Watch the total length of each of the params.
Example:
The code below uses the ``WatchLength`` update function to update
the ``length`` field to the length of the ``data`` field ::
int length<watch=data, update=WatchLength>;
char data[length];
def watch_l... |
WatchCrc32 - Watch the total crc32 of the params.
Example:
The code below uses the ``WatchCrc32`` update function to update
the ``crc`` field to the crc of the ``length`` and ``data`` fields ::
char length;
char data[length];
int crc<watch=length;data, updat... |
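The update side of WatchCrc32 reduces to a crc32 over the watched params' concatenated bytes; a minimal sketch with zlib (function name hypothetical):

```python
import zlib

def watch_crc32_sketch(watched_values):
    """Return the CRC-32 of the concatenated watched values, masked to
    an unsigned 32-bit result."""
    return zlib.crc32(b"".join(watched_values)) & 0xFFFFFFFF
```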
validate folder takes a cloned github repo, ensures
the existence of the config.json, and validates it.
def _validate_folder(self, folder=None):
''' validate folder takes a cloned github repo, ensures
the existence of the config.json, and validates it.
'''
from expfactor... |
validate is the entrypoint to all validation, for
a folder, config, or url. If a URL is found, it is
cloned and cleaned up.
:param validate_folder: ensures the folder name (github repo)
matches.
def validate(self, folder, cleanup=False, validate_fol... |
validate config is the primary validation function that checks
for presence and format of required fields.
Parameters
==========
:folder: full path to folder with config.json
:name: if provided, the folder name to check against exp_id
def _validate_config(self, folder, vali... |
get_validation_fields returns a list of tuples (each a field). We only
require the exp_id to coincide with the folder name, for the sake
of reproducibility (given that all are served from a sample image or Github
organization). All other fields are optional.
To specify runtime v... |
get_runtime_vars will return the urlparsed string of one or more runtime
variables. If None are present, None is returned.
Parameters
==========
varset: the variable set, a dictionary lookup with exp_id, token, vars
experiment: the exp_id to look up
token: the participant id... |
generate a lookup data structure from a
delimited file. We typically obtain the file name and delimiter from
the environment by way of EXPFACTORY_RUNTIME_VARS, and
EXPFACTORY_RUNTIME_DELIM, respectively, but the user can also parse
from a custom variable file by way of specifying it to the ... |
read the entire runtime variable file, and return a list of lists,
each corresponding to a row. We also check the header, and exit
if anything is missing or malformed.
Parameters
==========
variable_file: full path to the tabular file with token, exp_id, etc.
sep: the default... |
validate_row will ensure that a row has the proper length, and is
not empty and cleaned of extra spaces.
Parameters
==========
row: a single row, not yet parsed.
Returns a valid row, or None if not valid
def _validate_row(row, sep=',', required_length=None):
'''validate_row wi... |
parse row is a helper function to simply clean up a string, and parse
into a row based on a delimiter. If a required length is provided,
we check for this too.
def _parse_row(row, sep=','):
'''parse row is a helper function to simply clean up a string, and parse
into a row based on a delimiter... |
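Taken together, _parse_row and _validate_row describe a small clean-split-check pipeline; a self-contained sketch under those descriptions (names suffixed `_sketch` to mark them as illustrative):

```python
def parse_row_sketch(row, sep=','):
    """Strip stray whitespace and split the row on the delimiter."""
    return [field.strip() for field in row.strip().split(sep)]

def validate_row_sketch(row, sep=',', required_length=None):
    """Parse the row, then return it only if it is non-empty and, when
    a required length is given, matches that length; otherwise None."""
    parsed = parse_row_sketch(row, sep)
    if not any(parsed):
        return None
    if required_length is not None and len(parsed) != required_length:
        return None
    return parsed
```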
validate_header ensures that the first row contains the exp_id,
var_name, var_value, and token. Capitalization isn't important, but
ordering is. This criterion is very strict, but it's reasonable
to require.
Parameters
==========
header: the header row, as a list
requir... |
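Per the description above (capitalization ignored, ordering enforced), a minimal header check might look like this sketch; the required column list comes from the text, the function name is hypothetical:

```python
def validate_header_sketch(header):
    """Return True if the header starts with exp_id, var_name,
    var_value, token in that order, ignoring capitalization."""
    required = ['exp_id', 'var_name', 'var_value', 'token']
    normalized = [col.strip().lower() for col in header]
    return normalized[:len(required)] == required
```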
Decorator for views that checks that the user is logged in and is a staff
member, displaying the login page if necessary.
def superuser_required(view_func):
"""
Decorator for views that checks that the user is logged in and is a staff
member, displaying the login page if necessary.
"""
@wraps(v... |
Compiles the pattern lines.
*pattern_factory* can be either the name of a registered pattern
factory (:class:`str`), or a :class:`~collections.abc.Callable` used
to compile patterns. It must accept an uncompiled pattern (:class:`str`)
and return the compiled pattern (:class:`.Pattern`).
*lines* (:class:`~co... |
Matches the file to this path-spec.
*file* (:class:`str`) is the file path to be matched against
:attr:`self.patterns <PathSpec.patterns>`.
*separators* (:class:`~collections.abc.Collection` of :class:`str`)
optionally contains the path separators to normalize. See
:func:`~pathspec.util.normalize_file` for ... |
Matches the files to this path-spec.
*files* (:class:`~collections.abc.Iterable` of :class:`str`) contains
the file paths to be matched against :attr:`self.patterns
<PathSpec.patterns>`.
*separators* (:class:`~collections.abc.Collection` of :class:`str`;
or :data:`None`) optionally contains the path separat... |
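The gitignore-style semantics implied above (a later pattern can re-include or exclude a file matched earlier) can be approximated without the library using fnmatch; this is a rough stdlib stand-in, not pathspec's actual algorithm:

```python
from fnmatch import fnmatch

def match_files_sketch(patterns, files):
    """Match files against (glob, include) pairs, letting later
    patterns override earlier ones, loosely mimicking gitignore."""
    matched = set()
    for f in files:
        included = False
        for glob, include in patterns:
            if fnmatch(f, glob):
                included = include
        if included:
            matched.add(f)
    return matched
```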
Walks the specified root path for all files and matches them to this
path-spec.
*root* (:class:`str`) is the root directory to search for files.
*on_error* (:class:`~collections.abc.Callable` or :data:`None`)
optionally is the error handler for file-system exceptions. See
:func:`~pathspec.util.iter_tree` fo... |
Convert the pattern into a regular expression.
*pattern* (:class:`unicode` or :class:`bytes`) is the pattern to
convert into a regular expression.
Returns the uncompiled regular expression (:class:`unicode`, :class:`bytes`,
or :data:`None`), and whether matched files should be included
(:data:`True`), exclu... |
Translates the glob pattern to a regular expression. This is used in
the constructor to translate a path segment glob pattern to its
corresponding regular expression.
*pattern* (:class:`str`) is the glob pattern.
Returns the regular expression (:class:`str`).
def _translate_segment_glob(pattern):
"""
Tra... |
Warn about deprecation.
def pattern_to_regex(cls, *args, **kw):
"""
Warn about deprecation.
"""
cls._deprecated()
return super(GitIgnorePattern, cls).pattern_to_regex(*args, **kw) |
Walks the specified directory for all files.
*root* (:class:`str`) is the root directory to search for files.
*on_error* (:class:`~collections.abc.Callable` or :data:`None`)
optionally is the error handler for file-system exceptions. It will be
called with the exception (:exc:`OSError`). Reraise the exception to
... |
Scan the directory for all descendant files.
*root_full* (:class:`str`) the absolute path to the root directory.
*dir_rel* (:class:`str`) the path to the directory to scan relative to
*root_full*.
*memo* (:class:`dict`) keeps track of ancestor directories
encountered. Maps each ancestor real path (:class:`str``... |
Matches the file to the patterns.
*patterns* (:class:`~collections.abc.Iterable` of :class:`~pathspec.pattern.Pattern`)
contains the patterns to use.
*file* (:class:`str`) is the normalized file path to be matched
against *patterns*.
Returns :data:`True` if *file* matched; otherwise, :data:`False`.
def match_f... |
Matches the files to the patterns.
*patterns* (:class:`~collections.abc.Iterable` of :class:`~pathspec.pattern.Pattern`)
contains the patterns to use.
*files* (:class:`~collections.abc.Iterable` of :class:`str`) contains
the normalized file paths to be matched against *patterns*.
Returns the matched files (:cla... |
Normalizes the file path to use the POSIX path separator (i.e., ``'/'``).
*file* (:class:`str`) is the file path.
*separators* (:class:`~collections.abc.Collection` of :class:`str`; or
:data:`None`) optionally contains the path separators to normalize.
This does not need to include the POSIX path separator (``'/'... |
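A sketch of the separator normalization described above; the backslash-only default here is a simplification (the real utility presumably derives its defaults from the OS separators):

```python
def normalize_file_sketch(file, separators=None):
    """Rewrite each listed separator to the POSIX '/'."""
    if separators is None:
        separators = ['\\']  # simplified default for this sketch
    for sep in separators:
        file = file.replace(sep, '/')
    return file
```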
Normalizes the file paths to use the POSIX path separator.
*files* (:class:`~collections.abc.Iterable` of :class:`str`) contains
the file paths to be normalized.
*separators* (:class:`~collections.abc.Collection` of :class:`str`; or
:data:`None`) optionally contains the path separators to normalize.
See :func:`n... |
Registers the specified pattern factory.
*name* (:class:`str`) is the name to register the pattern factory
under.
*pattern_factory* (:class:`~collections.abc.Callable`) is used to
compile patterns. It must accept an uncompiled pattern (:class:`str`)
and return the compiled pattern (:class:`.Pattern`).
*overrid... |
*message* (:class:`str`) is the error message.
def message(self):
"""
*message* (:class:`str`) is the error message.
"""
return "Real path {real!r} was encountered at {first!r} and then {second!r}.".format(
real=self.real_path,
first=self.first_path,
second=self.second_path,
) |
Matches this pattern against the specified files.
*files* (:class:`~collections.abc.Iterable` of :class:`str`) contains
each file relative to the root directory (e.g., ``"relative/path/to/file"``).
Returns an :class:`~collections.abc.Iterable` yielding each matched
file path (:class:`str`).
def match(self, f... |
Matches this pattern against the specified files.
*files* (:class:`~collections.abc.Iterable` of :class:`str`)
contains each file relative to the root directory (e.g., "relative/path/to/file").
Returns an :class:`~collections.abc.Iterable` yielding each matched
file path (:class:`str`).
def match(self, files... |
Convert a User to a cached instance representation.
def user_default_serializer(self, obj):
"""Convert a User to a cached instance representation."""
if not obj:
return None
self.user_default_add_related_pks(obj)
return dict((
('id', obj.id),
('userna... |
Load a User from the database.
def user_default_loader(self, pk):
"""Load a User from the database."""
try:
obj = User.objects.get(pk=pk)
except User.DoesNotExist:
return None
else:
self.user_default_add_related_pks(obj)
return obj |
Add related primary keys to a User instance.
def user_default_add_related_pks(self, obj):
"""Add related primary keys to a User instance."""
if not hasattr(obj, '_votes_pks'):
obj._votes_pks = list(obj.votes.values_list('pk', flat=True)) |
Invalidate cached items when the Group changes.
def group_default_invalidator(self, obj):
    """Invalidate cached items when the Group changes."""
user_pks = User.objects.values_list('pk', flat=True)
return [('User', pk, False) for pk in user_pks] |
Convert a Question to a cached instance representation.
def question_default_serializer(self, obj):
"""Convert a Question to a cached instance representation."""
if not obj:
return None
self.question_default_add_related_pks(obj)
return dict((
('id', obj.id),
... |
Load a Question from the database.
def question_default_loader(self, pk):
"""Load a Question from the database."""
try:
obj = Question.objects.get(pk=pk)
except Question.DoesNotExist:
return None
else:
self.question_default_add_related_pks(obj)
... |
Add related primary keys to a Question instance.
def question_default_add_related_pks(self, obj):
"""Add related primary keys to a Question instance."""
if not hasattr(obj, '_choice_pks'):
obj._choice_pks = list(obj.choices.values_list('pk', flat=True)) |
Convert a Choice to a cached instance representation.
def choice_default_serializer(self, obj):
"""Convert a Choice to a cached instance representation."""
if not obj:
return None
self.choice_default_add_related_pks(obj)
return dict((
('id', obj.id),
... |
Load a Choice from the database.
def choice_default_loader(self, pk):
"""Load a Choice from the database."""
try:
obj = Choice.objects.get(pk=pk)
except Choice.DoesNotExist:
return None
else:
self.choice_default_add_related_pks(obj)
return... |
Add related primary keys to a Choice instance.
def choice_default_add_related_pks(self, obj):
"""Add related primary keys to a Choice instance."""
if not hasattr(obj, '_voter_pks'):
obj._voter_pks = obj.voters.values_list('pk', flat=True) |
Invalidate cached items when the Choice changes.
def choice_default_invalidator(self, obj):
    """Invalidate cached items when the Choice changes."""
invalid = [('Question', obj.question_id, True)]
for pk in obj.voters.values_list('pk', flat=True):
invalid.append(('User', pk, False))... |
Get the Django cache interface.
This allows disabling the cache with
settings.USE_DRF_INSTANCE_CACHE=False. It also delays import so that
Django Debug Toolbar will record cache requests.
def cache(self):
"""Get the Django cache interface.
This allows disabling the cache with
... |
Delete all versions of a cached instance.
def delete_all_versions(self, model_name, obj_pk):
"""Delete all versions of a cached instance."""
if self.cache:
for version in self.versions:
key = self.key_for(version, model_name, obj_pk)
self.cache.delete(key) |
Return the model-specific caching function.
def model_function(self, model_name, version, func_name):
"""Return the model-specific caching function."""
assert func_name in ('serializer', 'loader', 'invalidator')
name = "%s_%s_%s" % (model_name.lower(), version, func_name)
return getattr... |
Return the field function.
def field_function(self, type_code, func_name):
"""Return the field function."""
assert func_name in ('to_json', 'from_json')
name = "field_%s_%s" % (type_code.lower(), func_name)
return getattr(self, name) |
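The naming-convention dispatch shown in model_function and field_function is easy to isolate; a framework-free sketch with one illustrative (hypothetical) field type:

```python
class FieldRegistrySketch:
    """Minimal sketch of the field_<type_code>_<func_name> lookup
    convention described above, resolved with getattr."""

    def field_function(self, type_code, func_name):
        assert func_name in ('to_json', 'from_json')
        name = "field_%s_%s" % (type_code.lower(), func_name)
        return getattr(self, name)

    # one concrete pair, purely illustrative
    def field_int_to_json(self, value):
        return int(value)

    def field_int_from_json(self, json_value):
        return int(json_value)
```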
Convert a field to a JSON-serializable representation.
def field_to_json(self, type_code, key, *args, **kwargs):
"""Convert a field to a JSON-serializable representation."""
assert ':' not in key
to_json = self.field_function(type_code, 'to_json')
key_and_type = "%s:%s" % (key, type_cod... |
Convert a JSON-serializable representation back to a field.
def field_from_json(self, key_and_type, json_value):
"""Convert a JSON-serializable representation back to a field."""
assert ':' in key_and_type
key, type_code = key_and_type.split(':', 1)
from_json = self.field_function(type_... |
Get the cached native representation for one or more objects.
Keyword arguments:
object_specs - A sequence of triples (model name, pk, obj):
- model name - the name of the model
- pk - the primary key of the instance
- obj - the instance, or None to load it
version - The... |
Create or update a cached instance.
Keyword arguments are:
model_name - The name of the model
pk - The primary key of the instance
instance - The Django model instance, or None to load it
versions - Version to update, or None for all
update_only - If False (default), the... |
Convert a date to a date triple.
def field_date_to_json(self, day):
"""Convert a date to a date triple."""
if isinstance(day, six.string_types):
day = parse_date(day)
return [day.year, day.month, day.day] if day else None |
Convert a UTC timestamp to a UTC datetime.
def field_datetime_from_json(self, json_val):
"""Convert a UTC timestamp to a UTC datetime."""
if type(json_val) == int:
seconds = int(json_val)
dt = datetime.fromtimestamp(seconds, utc)
elif json_val is None:
dt = N... |
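The datetime encoding described in this entry and the next (naive datetimes assumed UTC, timestamps with microsecond resolution) round-trips as in this sketch, which substitutes timezone.utc for the module's utc object:

```python
from datetime import datetime, timezone

def datetime_to_json_sketch(dt):
    """Encode a datetime as a UTC timestamp; naive values are assumed
    to already be in UTC."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.timestamp()

def datetime_from_json_sketch(json_val):
    """Decode a timestamp (or None) back to an aware UTC datetime."""
    if json_val is None:
        return None
    return datetime.fromtimestamp(json_val, timezone.utc)
```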
Convert a datetime to a UTC timestamp w/ microsecond resolution.
datetimes w/o timezone will be assumed to be in UTC
def field_datetime_to_json(self, dt):
"""Convert a datetime to a UTC timestamp w/ microsecond resolution.
datetimes w/o timezone will be assumed to be in UTC
"""
... |
Convert json_val to a timedelta object.
json_val contains total number of seconds in the timedelta.
If json_val is a string it will be converted to a float.
def field_timedelta_from_json(self, json_val):
"""Convert json_val to a timedelta object.
json_val contains total number of seco... |
Convert timedelta to value containing total number of seconds.
If there are fractions of a second the return value will be a
string, otherwise it will be an int.
def field_timedelta_to_json(self, td):
"""Convert timedelta to value containing total number of seconds.
If there are fract... |
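The int-or-string rule stated above can be sketched directly from timedelta.total_seconds():

```python
from datetime import timedelta

def timedelta_to_json_sketch(td):
    """Total seconds as an int when whole, as a string when there are
    fractional seconds."""
    total = td.total_seconds()
    if total == int(total):
        return int(total)
    return str(total)
```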
Load a PkOnlyQueryset from a JSON dict.
This uses the same format as cached_queryset_from_json
def field_pklist_from_json(self, data):
"""Load a PkOnlyQueryset from a JSON dict.
This uses the same format as cached_queryset_from_json
"""
model = get_model(data['app'], data['mod... |
Convert a list of primary keys to a JSON dict.
This uses the same format as cached_queryset_to_json
def field_pklist_to_json(self, model, pks):
"""Convert a list of primary keys to a JSON dict.
This uses the same format as cached_queryset_to_json
"""
app_label = model._meta.ap... |
Load a PkOnlyModel from a JSON dict.
def field_pk_from_json(self, data):
"""Load a PkOnlyModel from a JSON dict."""
model = get_model(data['app'], data['model'])
return PkOnlyModel(self, model, data['pk']) |
Convert a primary key to a JSON dict.
def field_pk_to_json(self, model, pk):
"""Convert a primary key to a JSON dict."""
app_label = model._meta.app_label
model_name = model._meta.model_name
return {
'app': app_label,
'model': model_name,
'pk': pk,
... |
Update cache when choice.voters changes.
def choice_voters_changed_update_cache(
sender, instance, action, reverse, model, pk_set, **kwargs):
"""Update cache when choice.voters changes."""
if action not in ('post_add', 'post_remove', 'post_clear'):
# post_clear is not handled, because clear is ... |
Update the cache when an instance is deleted.
def post_delete_update_cache(sender, instance, **kwargs):
"""Update the cache when an instance is deleted."""
name = sender.__name__
if name in cached_model_names:
from .tasks import update_cache_for_instance
update_cache_for_instance(name, inst... |
Update the cache when an instance is created or modified.
def post_save_update_cache(sender, instance, created, raw, **kwargs):
"""Update the cache when an instance is created or modified."""
if raw:
return
name = sender.__name__
if name in cached_model_names:
delay_cache = getattr(inst... |
Get the queryset for the action.
If the action is a read action, return a CachedQueryset.
Otherwise, return a Django queryset.
def get_queryset(self):
    """Get the queryset for the action.
    If the action is a read action, return a CachedQueryset.
    Otherwise, return a Django queryset.
    """
... |
Return the object the view is displaying.
Same as rest_framework.generics.GenericAPIView, but:
- Failed assertions instead of deprecations
def get_object(self, queryset=None):
"""
Return the object the view is displaying.
Same as rest_framework.generics.GenericAPIView, but:
... |
Return an object or raise a 404.
Same as Django's standard shortcut, but make sure to raise 404
if the filter_kwargs don't match the required types.
def get_object_or_404(self, queryset, *filter_args, **filter_kwargs):
"""Return an object or raise a 404.
Same as Django's standard shor... |
Resolve the object.
This returns default (if present) or fails on an Empty.
def r(self, **kwargs):
"""
Resolve the object.
This returns default (if present) or fails on an Empty.
"""
# by using kwargs we ensure that usage of positional arguments, as if
# this o... |
Resolve the object.
This will always succeed, since, if a lookup fails, an Empty
instance will be returned farther upstream.
def r(self, **kwargs):
"""
Resolve the object.
This will always succeed, since, if a lookup fails, an Empty
instance will be returned farther up... |
Update the cache for an instance, with cascading updates.
def update_cache_for_instance(
model_name, instance_pk, instance=None, version=None):
"""Update the cache for an instance, with cascading updates."""
cache = SampleCache()
invalid = cache.update_instance(model_name, instance_pk, instance, ve... |
Return the primary keys as a list.
The only valid call is values_list('pk', flat=True)
def values_list(self, *args, **kwargs):
"""Return the primary keys as a list.
The only valid call is values_list('pk', flat=True)
"""
flat = kwargs.pop('flat', False)
assert flat is ... |
Lazy-load the primary keys.
def pks(self):
"""Lazy-load the primary keys."""
if self._primary_keys is None:
self._primary_keys = list(
self.queryset.values_list('pk', flat=True))
return self._primary_keys |
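The memoization pattern in pks can be shown without Django; this sketch swaps the queryset for an arbitrary loader callable:

```python
class PkCacheSketch:
    """Framework-free sketch of the lazy pattern above: primary keys
    are fetched once from a loader callable and then memoized."""

    def __init__(self, loader):
        self._loader = loader
        self._primary_keys = None

    @property
    def pks(self):
        if self._primary_keys is None:
            self._primary_keys = list(self._loader())
        return self._primary_keys
```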
Return a count of instances.
def count(self):
"""Return a count of instances."""
if self._primary_keys is None:
return self.queryset.count()
else:
return len(self.pks) |
Filter the base queryset.
def filter(self, **kwargs):
"""Filter the base queryset."""
assert not self._primary_keys
self.queryset = self.queryset.filter(**kwargs)
return self |
Return the single item from the filtered queryset.
def get(self, *args, **kwargs):
"""Return the single item from the filtered queryset."""
assert not args
assert list(kwargs.keys()) == ['pk']
pk = kwargs['pk']
model_name = self.model.__name__
object_spec = (model_name, ... |
Load all constant generators from settings.WEBPACK_CONSTANT_PROCESSORS
and concat their values.
def collect(cls):
""" Load all constant generators from settings.WEBPACK_CONSTANT_PROCESSORS
and concat their values.
"""
constants = {}
for method_path in WebpackCon... |
Validates phonenumber
Similar to phonenumber_field.validators.validate_international_phonenumber() but uses a different message if the
country prefix is absent.
def phonenumber_validation(data):
""" Validates phonenumber
Similar to phonenumber_field.validators.validate_international_phonenumber() but... |
Create Django translation catalogue for `locale`.
def get_catalog(self, locale):
"""Create Django translation catalogue for `locale`."""
with translation.override(locale):
translation_engine = DjangoTranslation(locale, domain=self.domain, localedirs=self.paths)
trans_cat = tran... |
Create list of matching packages for translation engine.
def get_paths(cls, packages):
"""Create list of matching packages for translation engine."""
allowable_packages = dict((app_config.name, app_config) for app_config in apps.get_app_configs())
app_configs = [allowable_packages[p] for p in p... |
Get `.po` header value.
def get_catalogue_header_value(cls, catalog, key):
"""Get `.po` header value."""
header_value = None
if '' in catalog:
for line in catalog[''].split('\n'):
if line.startswith('%s:' % key):
header_value = line.split(':', 1)[... |
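The header lookup above scans the catalog's empty-msgid metadata block; isolated from the catalog object, it reduces to this sketch:

```python
def header_value_sketch(metadata, key):
    """The empty msgid holds a newline-separated 'Key: value' block;
    return the value for key, or None when it is absent."""
    for line in metadata.split('\n'):
        if line.startswith('%s:' % key):
            return line.split(':', 1)[1].strip()
    return None
```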
Return the number of plurals for this catalog language, or 2 if no
plural string is available.
def _num_plurals(self, catalogue):
"""
Return the number of plurals for this catalog language, or 2 if no
plural string is available.
"""
match = re.search(r'nplurals=\s*(\d+)'... |
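Completing the truncated _num_plurals logic as a standalone sketch (the fallback of 2 comes from the docstring above):

```python
import re

def num_plurals_sketch(plural_forms):
    """Return the declared nplurals count, or 2 when the Plural-Forms
    string is missing or does not declare one."""
    if plural_forms:
        match = re.search(r'nplurals=\s*(\d+)', plural_forms)
        if match:
            return int(match.group(1))
    return 2
```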
Populate header with correct data from top-most locale file.
def make_header(self, locale, catalog):
"""Populate header with correct data from top-most locale file."""
return {
"po-revision-date": self.get_catalogue_header_value(catalog, 'PO-Revision-Date'),
"mime-version": self... |
Collect all `domain` translations and return `Tuple[languages, locale_data]`
def collect_translations(self):
"""Collect all `domain` translations and return `Tuple[languages, locale_data]`"""
languages = {}
locale_data = {}
for language_code, label in settings.LANGUAGES:
la... |
Tiny helper function that gets used all over the place to join the object ID to the endpoint and run a GET request, returning the result
def get_endpoint_obj(client, endpoint, object_id):
''' Tiny helper function that gets used all over the place to join the object ID to the endpoint and run a GET request, returni... |
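The join step that get_endpoint_obj performs before its GET can be sketched on its own; the base URL in the test is illustrative and the HTTP call itself is omitted:

```python
def endpoint_url_sketch(base_url, endpoint, object_id):
    """Join an API base URL, endpoint, and object ID into a request URL."""
    return '/'.join([base_url.rstrip('/'), endpoint.strip('/'), str(object_id)])
```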
Helper method to ease the repetitiveness of updating an... SO VERY DRY
(That's a doubly-effective pun because my predecessor - https://github.com/bsmt/wunderpy - found maintaining a Python Wunderlist API to be "as tedious and boring as a liberal arts school poetry slam")
def update_endpoint_obj(client, endpoin... |
Helper method to validate that the given response to a Wunderlist API request is as expected
def _validate_response(self, method, response):
    ''' Helper method to validate that the given response to a Wunderlist API request is as expected '''
# TODO Fill this out using the error codes here: https://developer.wunderlist.com/documen... |
Send a request to the given Wunderlist API endpoint
Params:
endpoint -- API endpoint to send request to
Keyword Args:
headers -- headers to add to the request
method -- GET, PUT, PATCH, DELETE, etc.
params -- parameters to encode in the request
data -- data to s... |
Exchange a temporary code for an access token allowing access to a user's account
See https://developer.wunderlist.com/documentation/concepts/authorization for more info
def get_access_token(self, code, client_id, client_secret):
'''
Exchange a temporary code for an access token allowing acce... |
Send a request to the given Wunderlist API with 'X-Access-Token' and 'X-Client-ID' headers and ensure the response code is as expected given the request type
Params:
endpoint -- API endpoint to send request to
Keyword Args:
method -- GET, PUT, PATCH, DELETE, etc.
params -- para... |
Updates the list with the given ID to have the given title and public flag
def update_list(self, list_id, revision, title=None, public=None):
''' Updates the list with the given ID to have the given title and public flag '''
return lists_endpoint.update_list(self, list_id, revision, title=title, public... |
Gets tasks for the list with the given ID, filtered by the given completion flag
def get_tasks(self, list_id, completed=False):
''' Gets tasks for the list with the given ID, filtered by the given completion flag '''
return tasks_endpoint.get_tasks(self, list_id, completed=completed) |
Creates a new task with the given information in the list with the given ID
def create_task(self, list_id, title, assignee_id=None, completed=None, recurrence_type=None, recurrence_count=None, due_date=None, starred=None):
''' Creates a new task with the given information in the list with the given ID '''
... |
Updates the task with the given ID to have the given information
NOTE: The 'remove' parameter is an optional list of parameters to remove from the given task, e.g. ['due_date']
def update_task(self, task_id, revision, title=None, assignee_id=None, completed=None, recurrence_type=None, recurrence_coun... |
Updates the note with the given ID to have the given content
def update_note(self, note_id, revision, content):
''' Updates the note with the given ID to have the given content '''
return notes_endpoint.update_note(self, note_id, revision, content) |
Gets subtasks for task with given ID
def get_task_subtasks(self, task_id, completed=False):
''' Gets subtasks for task with given ID '''
return subtasks_endpoint.get_task_subtasks(self, task_id, completed=completed) |
Gets subtasks for the list with given ID
def get_list_subtasks(self, list_id, completed=False):
''' Gets subtasks for the list with given ID '''
return subtasks_endpoint.get_list_subtasks(self, list_id, completed=completed) |
Creates a subtask with the given title under the task with the given ID
Return:
Newly-created subtask
def create_subtask(self, task_id, title, completed=False):
'''
Creates a subtask with the given title under the task with the given ID
Return:
Newly... |
Updates the subtask with the given ID
See https://developer.wunderlist.com/documentation/endpoints/subtask for detailed parameter information
Returns:
Subtask with given ID with properties and revision updated
def update_subtask(self, subtask_id, revision, title=None, completed=None):
... |
Updates the ordering of lists to have the given value. The given ID and revision should match the singleton object defining how lists are laid out.
See https://developer.wunderlist.com/documentation/endpoints/positions for more info
Return:
The updated ListPositionsObj-mapped object defining t... |