Load the configuration for a single solver.
Makes a blocking web call to `{endpoint}/solvers/remote/{solver_name}/`, where `{endpoint}`
is a URL configured for the client, and returns a :class:`.Solver` instance
that can be used to submit sampling problems to the D-Wave API and retrieve results... |
Enqueue a problem for submission to the server.
This method is thread safe.
def _submit(self, body, future):
"""Enqueue a problem for submission to the server.
This method is thread safe.
"""
self._submission_queue.put(self._submit.Message(body, future)) |
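The thread safety here comes entirely from `queue.Queue`. A self-contained sketch of the enqueue/daemon-worker pattern (the `Client` class, `Message` tuple placement, and the local `submitted` list standing in for the real HTTP calls are all assumptions) might look like:

```python
import queue
import threading
from collections import namedtuple

# message type assumed; the real client attaches it to the bound method
Message = namedtuple('Message', ['body', 'future'])

class Client:
    def __init__(self):
        self._submission_queue = queue.Queue()
        self.submitted = []  # stands in for the HTTP calls the real worker makes
        worker = threading.Thread(target=self._do_submit_problems, daemon=True)
        worker.start()

    def _submit(self, body, future):
        # Queue.put is itself thread safe, which is all _submit relies on
        self._submission_queue.put(Message(body, future))

    def _do_submit_problems(self):
        # runs forever inside the daemon thread, draining the queue
        while True:
            msg = self._submission_queue.get()
            self.submitted.append(msg.body)
            self._submission_queue.task_done()
```

Because the worker is a daemon thread, it dies with the process instead of blocking interpreter shutdown.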
Pull problems from the submission queue and submit them.
Note:
This method is always run inside of a daemon thread.
def _do_submit_problems(self):
"""Pull problems from the submission queue and submit them.
Note:
This method is always run inside of a daemon thread.
... |
Handle the results of a problem submission or results request.
This method checks the status of the problem and puts it in the correct queue.
Args:
message (dict): Update message from the SAPI server wrt. this problem.
future `Future`: future corresponding to the problem
... |
Pull ids from the cancel queue and submit them.
Note:
This method is always run inside of a daemon thread.
def _do_cancel_problems(self):
"""Pull ids from the cancel queue and submit them.
Note:
This method is always run inside of a daemon thread.
"""
t... |
Enqueue a problem to poll the server for status.
def _poll(self, future):
"""Enqueue a problem to poll the server for status."""
if future._poll_backoff is None:
# on first poll, start with minimal back-off
future._poll_backoff = self._POLL_BACKOFF_MIN
# if we have... |
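The comment suggests an exponential back-off schedule between polls. A sketch of that schedule (only `_POLL_BACKOFF_MIN` appears in the snippet; the doubling factor and the cap are assumptions) could be:

```python
_POLL_BACKOFF_MIN = 0.05   # only this name appears in the snippet
_POLL_BACKOFF_MAX = 60.0   # cap is an assumption

def next_poll_backoff(current):
    # on first poll, start with minimal back-off
    if current is None:
        return _POLL_BACKOFF_MIN
    # later polls grow geometrically, capped at the maximum
    return min(current * 2, _POLL_BACKOFF_MAX)
```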
Poll the server for the status of a set of problems.
Note:
This method is always run inside of a daemon thread.
def _do_poll_problems(self):
"""Poll the server for the status of a set of problems.
Note:
This method is always run inside of a daemon thread.
"""
... |
Submit a query asking for the results for a particular problem.
To request the results of a problem: ``GET /problems/{problem_id}/``
Note:
This method is always run inside of a daemon thread.
def _do_load_results(self):
"""Submit a query asking for the results for a particular pro... |
Encode the binary quadratic problem for submission to a given solver,
using the `qp` format for data.
Args:
solver (:class:`dwave.cloud.solver.Solver`):
The solver used.
linear (dict[variable, bias]/list[variable, bias]):
Linear terms of the model.
quadratic (d... |
Decode SAPI response that uses `qp` format, without numpy.
The 'qp' format is the current encoding used for problems and samples.
In this encoding the reply is generally json, but the samples, energy,
and histogram data (the occurrence count of each solution), are all
base64 encoded arrays.
def decode... |
Helper for decode_qp, turns a single byte into a list of bits.
Args:
byte: byte to be decoded
Returns:
list of bits corresponding to byte
def _decode_byte(byte):
"""Helper for decode_qp, turns a single byte into a list of bits.
Args:
byte: byte to be decoded
Returns:
... |
Helper for decode_qp, decodes an int array.
The int array is stored as little endian 32 bit integers.
The array has then been base64 encoded. Since we are decoding we do these
steps in reverse.
def _decode_ints(message):
"""Helper for decode_qp, decodes an int array.
The int array is stored as li... |
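The docstring pins down the encoding precisely: base64 over little-endian 32-bit integers, decoded in reverse order. A stdlib-only sketch of that decode step (function name simplified from the private helper) might be:

```python
import base64
import struct

def decode_ints(message):
    """Reverse the `qp` int encoding: base64, then little-endian 32-bit ints."""
    raw = base64.b64decode(message)
    return list(struct.unpack('<{}i'.format(len(raw) // 4), raw))
```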
Helper for decode_qp, decodes a double array.
The double array is stored as little endian 64 bit doubles.
The array has then been base64 encoded. Since we are decoding we do these
steps in reverse.
Args:
message: the double array
Returns:
decoded double array
def _decode_doubles(... |
Decode SAPI response, results in a `qp` format, explicitly using numpy.
If numpy is not installed, the method will fail.
To use numpy for decoding but return the results as lists (instead of
numpy matrices), set `return_matrix=False`.
def decode_qp_numpy(msg, return_matrix=True):
"""Decode SAPI respon... |
Calculate the energy of a state given the Hamiltonian.
Args:
linear: Linear Hamiltonian terms.
quad: Quadratic Hamiltonian terms.
state: Vector of spins describing the system state.
Returns:
Energy of the state evaluated by the given energy function.
def evaluate_ising(linear,... |
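The docstring fully specifies the computation: the Ising energy is the sum of linear biases times spins plus quadratic couplings times spin products. A minimal dict-based sketch (argument handling simplified from the real helper, which also accepts lists) could be:

```python
def evaluate_ising(linear, quad, state):
    """Energy of `state` under an Ising Hamiltonian given as two dicts."""
    energy = 0.0
    # linear terms: sum of h_i * s_i
    for variable, bias in linear.items():
        energy += bias * state[variable]
    # quadratic terms: sum of J_uv * s_u * s_v
    for (u, v), bias in quad.items():
        energy += bias * state[u] * state[v]
    return energy
```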
Calculate the set of all active qubits. A qubit is "active" if it has
a bias or coupling attached.
Args:
linear (dict[variable, bias]/list[variable, bias]):
Linear terms of the model.
quadratic (dict[(variable, variable), bias]):
Quadratic terms of the model.
Returns:
... |
Generates an Ising problem formulation valid for a particular solver,
using all qubits and all couplings and linear/quadratic biases sampled
uniformly from `h_range`/`j_range`.
def generate_random_ising_problem(solver, h_range=None, j_range=None):
"""Generates an Ising problem formulation valid for a parti... |
Uniform (key, value) iteration on a `dict`,
or (idx, value) on a `list`.
def uniform_iterator(sequence):
"""Uniform (key, value) iteration on a `dict`,
or (idx, value) on a `list`."""
if isinstance(sequence, abc.Mapping):
return six.iteritems(sequence)
else:
return enumerate(sequen... |
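On Python 3 the `six` shim is unnecessary; an equivalent version using only the standard library would be:

```python
from collections import abc

def uniform_iterator(sequence):
    """(key, value) pairs for a mapping, (index, value) pairs for a sequence."""
    if isinstance(sequence, abc.Mapping):
        return iter(sequence.items())
    return enumerate(sequence)
```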
Uniform `dict`/`list` item getter, where `index` is interpreted as a key
for maps and as numeric index for lists.
def uniform_get(sequence, index, default=None):
"""Uniform `dict`/`list` item getter, where `index` is interpreted as a key
for maps and as numeric index for lists."""
if isinstance(sequen... |
Strips elements of `values` from the beginning of `sequence`.
def strip_head(sequence, values):
"""Strips elements of `values` from the beginning of `sequence`."""
values = set(values)
return list(itertools.dropwhile(lambda x: x in values, sequence)) |
Strip `values` from the end of `sequence`.
def strip_tail(sequence, values):
"""Strip `values` from the end of `sequence`."""
return list(reversed(list(strip_head(reversed(sequence), values)))) |
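Both helpers are shown in full above, so a usage example is straightforward; `strip_tail` simply reuses `strip_head` on the reversed sequence:

```python
import itertools

def strip_head(sequence, values):
    """Strip elements of `values` from the beginning of `sequence`."""
    values = set(values)
    return list(itertools.dropwhile(lambda x: x in values, sequence))

def strip_tail(sequence, values):
    """Strip `values` from the end of `sequence`."""
    return list(reversed(list(strip_head(reversed(sequence), values))))

print(strip_head([0, 0, 1, 2, 0], [0]))  # [1, 2, 0]
print(strip_tail([0, 0, 1, 2, 0], [0]))  # [0, 0, 1, 2]
```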
Decorator to create eager Click info switch option, as described in:
http://click.pocoo.org/6/options/#callbacks-and-eager-options.
Takes a no-argument function and abstracts the boilerplate required by
Click (value checking, exit on done).
Example:
@click.option('--my-option', is_flag=True, ... |
Convert timezone-aware `datetime` to POSIX timestamp and
return seconds since UNIX epoch.
Note: similar to `datetime.timestamp()` in Python 3.3+.
def datetime_to_timestamp(dt):
"""Convert timezone-aware `datetime` to POSIX timestamp and
return seconds since UNIX epoch.
Note: similar to `datetime.... |
Return User-Agent ~ "name/version language/version interpreter/version os/version".
def user_agent(name, version):
"""Return User-Agent ~ "name/version language/version interpreter/version os/version"."""
def _interpreter():
name = platform.python_implementation()
version = platform.python_ver... |
Hash mutable arguments' containers with immutable keys and values.
def argshash(self, args, kwargs):
"Hash mutable arguments' containers with immutable keys and values."
a = repr(args)
b = repr(sorted((repr(k), repr(v)) for k, v in kwargs.items()))
return a + b |
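`argshash` exists to key a cache even when arguments are unhashable (dicts, lists), by hashing their reprs instead. A minimal memoizing wrapper built around it (the `Memoize` class is hypothetical; only `argshash` comes from the snippet) might be:

```python
class Memoize:
    """Hypothetical cache wrapper; only argshash() is taken from the snippet."""

    def __init__(self, fn):
        self.fn = fn
        self._cache = {}

    def argshash(self, args, kwargs):
        # hash mutable containers by way of their (stable) reprs
        a = repr(args)
        b = repr(sorted((repr(k), repr(v)) for k, v in kwargs.items()))
        return a + b

    def __call__(self, *args, **kwargs):
        key = self.argshash(args, kwargs)
        if key not in self._cache:
            self._cache[key] = self.fn(*args, **kwargs)
        return self._cache[key]
```

Sorting the kwargs reprs makes the key independent of keyword-argument order.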
Adds `next` to the context.
This makes sure that the `next` parameter doesn't get lost if the
form submission was invalid.
def get_context_data(self, **kwargs):
"""
Adds `next` to the context.
This makes sure that the `next` parameter doesn't get lost if the
form was su... |
Returns the success URL.
This is either the given `next` URL parameter or the content object's
`get_absolute_url` method's return value.
def get_success_url(self):
"""
Returns the success URL.
This is either the given `next` URL parameter or the content object's
`get_a... |
Adds useful objects to the class and performs security checks.
def dispatch(self, request, *args, **kwargs):
"""Adds useful objects to the class and performs security checks."""
self._add_next_and_user(request)
self.content_object = None
self.content_type = None
self.object_id =... |
Adds useful objects to the class.
def dispatch(self, request, *args, **kwargs):
"""Adds useful objects to the class."""
self._add_next_and_user(request)
return super(DeleteImageView, self).dispatch(request, *args, **kwargs) |
Making sure that a user can only delete his own images.
Even when he forges the request URL.
def get_queryset(self):
"""
Making sure that a user can only delete his own images.
Even when he forges the request URL.
"""
queryset = super(DeleteImageView, self).get_querys... |
Adds useful objects to the class.
def dispatch(self, request, *args, **kwargs):
"""Adds useful objects to the class."""
self._add_next_and_user(request)
return super(UpdateImageView, self).dispatch(request, *args, **kwargs) |
Making sure that a user can only edit his own images.
Even when he forges the request URL.
def get_queryset(self):
"""
Making sure that a user can only edit his own images.
Even when he forges the request URL.
"""
queryset = super(UpdateImageView, self).get_queryset()... |
Deletes all user media images of the given instance.
def _delete_images(self, instance):
"""Deletes all user media images of the given instance."""
UserMediaImage.objects.filter(
content_type=ContentType.objects.get_for_model(instance),
object_id=instance.pk,
user=in... |
It seems that something changed in Django 1.5.
When Django tries to validate the form, it checks whether the generated
filename fits into the max_length. But at this point, self.instance.user
is not yet set so our filename generation function cannot create
the new file path because it nee... |
Returns a unique filename for images.
def get_image_file_path(instance, filename):
"""Returns a unique filename for images."""
ext = filename.split('.')[-1]
filename = '%s.%s' % (uuid.uuid4(), ext)
return os.path.join(
'user_media', str(instance.user.pk), 'images', filename) |
Makes sure that an image is also deleted from the media directory.
This should prevent a load of "dead" image files on disc.
def image_post_delete_handler(sender, instance, **kwargs):
    """
    Makes sure that an image is also deleted from the media directory.
This should prevent a load of "dead" image... |
Returns a thumbnail's coordinates.
def box_coordinates(self):
"""Returns a thumbnail's coordinates."""
if (
self.thumb_x is not None and
self.thumb_y is not None and
self.thumb_x2 is not None and
self.thumb_y2 is not None
):
return (
... |
Returns a thumbnail's large size.
def large_size(self, as_string=True):
"""Returns a thumbnail's large size."""
size = getattr(settings, 'USER_MEDIA_THUMB_SIZE_LARGE', (150, 150))
if as_string:
return u'{}x{}'.format(size[0], size[1])
return size |
Uses box coordinates to crop an image without resizing it first.
def crop_box(im, box=False, **kwargs):
"""Uses box coordinates to crop an image without resizing it first."""
if box:
im = im.crop(box)
return im |
Loads a GermaNet instance connected to the given MongoDB instance.
Arguments:
- `host`: the hostname of the MongoDB instance
- `port`: the port number of the MongoDB instance
- `database_name`: the name of the GermaNet database on the
MongoDB instance
def load_germanet(host = None, port = None, ... |
Set the cache size used to reduce the number of database
access operations.
def cache_size(self, new_value):
'''
Set the cache size used to reduce the number of database
access operations.
'''
if type(new_value) == int and 0 < new_value:
if self._lemma_cache ... |
A generator over all the lemmas in the GermaNet database.
def all_lemmas(self):
'''
A generator over all the lemmas in the GermaNet database.
'''
for lemma_dict in self._mongo_db.lexunits.find():
yield Lemma(self, lemma_dict) |
Looks up lemmas in the GermaNet database.
Arguments:
- `lemma`:
- `pos`:
def lemmas(self, lemma, pos = None):
'''
Looks up lemmas in the GermaNet database.
Arguments:
- `lemma`:
- `pos`:
'''
if pos is not None:
if pos not in ... |
A generator over all the synsets in the GermaNet database.
def all_synsets(self):
'''
A generator over all the synsets in the GermaNet database.
'''
for synset_dict in self._mongo_db.synsets.find():
yield Synset(self, synset_dict) |
Looks up synsets in the GermaNet database.
Arguments:
- `lemma`:
- `pos`:
def synsets(self, lemma, pos = None):
'''
Looks up synsets in the GermaNet database.
Arguments:
- `lemma`:
- `pos`:
'''
return sorted(set(lemma_obj.synset
... |
Looks up a synset in GermaNet using its string representation.
Arguments:
- `synset_repr`: a unicode string containing the lemma, part
of speech, and sense number of the first lemma of the synset
>>> gn.synset(u'funktionieren.v.2')
Synset(funktionieren.v.2)
def synset(self, ... |
Builds a Synset object from the database entry with the given
ObjectId.
Arguments:
- `mongo_id`: a bson.objectid.ObjectId object
def get_synset_by_id(self, mongo_id):
'''
Builds a Synset object from the database entry with the given
ObjectId.
Arguments:
... |
Builds a Lemma object from the database entry with the given
ObjectId.
Arguments:
- `mongo_id`: a bson.objectid.ObjectId object
def get_lemma_by_id(self, mongo_id):
'''
Builds a Lemma object from the database entry with the given
ObjectId.
Arguments:
- ... |
Tries to find the base form (lemma) of the given word, using
the data provided by the Projekt deutscher Wortschatz. This
method returns a list of potential lemmas.
>>> gn.lemmatise(u'Männer')
[u'Mann']
>>> gn.lemmatise(u'XYZ123')
[u'XYZ123']
def lemmatise(self, word):
... |
Globs the XML files contained in the given directory and sorts
them into sections for import into the MongoDB database.
Arguments:
- `xml_path`: the path to the directory containing the GermaNet
XML files
def find_germanet_xml_files(xml_path):
'''
Globs the XML files contained in the given d... |
Error checking of XML input: check that the given node has certain
required attributes, and does not have any unrecognised
attributes.
Arguments:
- `loc`: a string with some information about the location of the
error in the XML file
- `node`: the node to check
- `recognised_attribs`: a s... |
Reads in a GermaNet lexical information file and returns its
contents as a list of dictionary structures.
Arguments:
- `filename`: the name of the XML file to read
def read_lexical_file(filename):
'''
Reads in a GermaNet lexical information file and returns its
contents as a list of dictionary... |
Reads the GermaNet relation file ``gn_relations.xml`` which lists
all the relations holding between lexical units and synsets.
Arguments:
- `filename`:
def read_relation_file(filename):
'''
Reads the GermaNet relation file ``gn_relations.xml`` which lists
all the relations holding between lexi... |
Reads in a GermaNet wiktionary paraphrase file and returns its
contents as a list of dictionary structures.
Arguments:
- `filename`:
def read_paraphrase_file(filename):
'''
Reads in a GermaNet wiktionary paraphrase file and returns its
contents as a list of dictionary structures.
Argument... |
Reads in the given lexical information files and inserts their
contents into the given MongoDB database.
Arguments:
- `germanet_db`: a pymongo.database.Database object
- `lex_files`: a list of paths to XML files containing lexical
information
def insert_lexical_information(germanet_db, lex_files)... |
Reads in the given GermaNet relation file and inserts its contents
into the given MongoDB database.
Arguments:
- `germanet_db`: a pymongo.database.Database object
- `gn_rels_file`:
def insert_relation_information(germanet_db, gn_rels_file):
'''
Reads in the given GermaNet relation file and ins... |
Reads in the given GermaNet relation file and inserts its contents
into the given MongoDB database.
Arguments:
- `germanet_db`: a pymongo.database.Database object
- `wiktionary_files`:
def insert_paraphrase_information(germanet_db, wiktionary_files):
'''
Reads in the given GermaNet relation fi... |
Creates the lemmatiser collection in the given MongoDB instance
using the data derived from the Projekt deutscher Wortschatz.
Arguments:
- `germanet_db`: a pymongo.database.Database object
def insert_lemmatisation_data(germanet_db):
'''
Creates the lemmatiser collection in the given MongoDB instan... |
For every synset in GermaNet, inserts count information derived
from SDEWAC.
Arguments:
- `germanet_db`: a pymongo.database.Database object
def insert_infocontent_data(germanet_db):
'''
For every synset in GermaNet, inserts count information derived
from SDEWAC.
Arguments:
- `germanet... |
For every part of speech in GermaNet, computes the maximum
min_depth in that hierarchy.
Arguments:
- `germanet_db`: a pymongo.database.Database object
def compute_max_min_depth(germanet_db):
'''
For every part of speech in GermaNet, computes the maximum
min_depth in that hierarchy.
Argume... |
Main function.
def main():
'''Main function.'''
usage = ('\n\n %prog [options] XML_PATH\n\nArguments:\n\n '
'XML_PATH the directory containing the '
'GermaNet .xml files')
parser = optparse.OptionParser(usage=usage)
parser.add_option('--host', default=None,
... |
in
def handle_In(self, node):
'''in'''
try:
elts = node.elts
except AttributeError:
raise ParseError('Invalid value type for `in` operator: {0}'.format(node.__class__.__name__),
col_offset=node.col_offset)
return {'$in': list(map(self... |
Assign a value to this Value.
def AssignVar(self, value):
"""Assign a value to this Value."""
self.value = value
# Call OnAssignVar on options.
[option.OnAssignVar() for option in self.options] |
Parse a 'Value' declaration.
Args:
value: String line from a template file, must begin with 'Value '.
Raises:
TextFSMTemplateError: Value declaration contains an error.
def Parse(self, value):
"""Parse a 'Value' declaration.
Args:
value: String line from a template file, must begin... |
Passes the line through each rule until a match is made.
Args:
line: A string, the current input line.
def _CheckLine(self, line):
"""Passes the line through each rule until a match is made.
Args:
line: A string, the current input line.
"""
for rule in self._cur_state:
matched =... |
Make graphite-api data points dictionary from Influxdb ResultSet data
def _make_graphite_api_points_list(influxdb_data):
"""Make graphite-api data points dictionary from Influxdb ResultSet data"""
_data = {}
for key in influxdb_data.keys():
_data[key[0]] = [(datetime.datetime.fromtimestamp(float(d[... |
Set up the log level and log file if set
def _setup_logger(self, level, log_file):
    """Set up the log level and log file if set"""
if logger.handlers:
return
level = getattr(logging, level.upper())
logger.setLevel(level)
formatter = logging.Formatter(
'[%(levelname)... |
Turn glob (graphite) queries into compiled regex
* becomes .*
. becomes \.
The `fmt` argument lets the caller control anchoring (it must contain exactly one `{0}` placeholder).
def compile_regex(self, fmt, query):
"""Turn glob (graphite) queries into compiled regex
* becomes .*
. becomes \... |
Gets an <expression> and optional <schema>.
<expression> should be a string of python code.
<schema> should be a dictionary mapping field names to types.
def find(expression, schema=None):
'''
Gets an <expression> and optional <schema>.
<expression> should be a string of python code.
<schema> s... |
Gets a list of <fields> to sort by.
Also supports getting a single string for sorting by one field.
Reverse sort is supported by appending '-' to the field name.
Example: sort(['age', '-height']) will sort by ascending age and descending height.
def sort(fields):
'''
Gets a list of <fields> to sort... |
Return sorted version of nested dicts/lists for comparing.
Modified from:
http://stackoverflow.com/a/25851972
def ordered(obj):
"""
Return sorted version of nested dicts/lists for comparing.
Modified from:
http://stackoverflow.com/a/25851972
"""
if isinstance(obj, collections.abc.Mapp... |
Does not yield the input class
def walk_subclasses(root):
"""Does not yield the input class"""
classes = [root]
visited = set()
while classes:
cls = classes.pop()
if cls is type or cls in visited:
continue
classes.extend(cls.__subclasses__())
visited.add(cls)... |
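The snippet is truncated before the `yield`, but the docstring ("does not yield the input class") and the visited-set traversal make the intent clear. One plausible reconstruction, not the original code, is:

```python
def walk_subclasses(root):
    """Yield every transitive subclass of `root`, but never `root` itself."""
    classes = [root]
    visited = set()
    while classes:
        cls = classes.pop()
        # skip `type` and anything already seen (diamond inheritance)
        if cls is type or cls in visited:
            continue
        classes.extend(cls.__subclasses__())
        visited.add(cls)
        if cls is not root:
            yield cls
```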
dump the hash (and range, if there is one) key(s) of an object into
a dynamo-friendly format.
returns {dynamo_name: {type: value} for dynamo_name in hash/range keys}
def dump_key(engine, obj):
"""dump the hash (and range, if there is one) key(s) of an object into
a dynamo-friendly format.
returns... |
Return an expiration `days` in the future
def new_expiry(days=DEFAULT_PASTE_LIFETIME_DAYS):
"""Return an expiration `days` in the future"""
now = delorean.Delorean()
return now + datetime.timedelta(days=days) |
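`new_expiry` relies on the third-party `delorean` package for a UTC "now". A stdlib-only equivalent (the default lifetime value is assumed; the real constant isn't shown) would be:

```python
import datetime

DEFAULT_PASTE_LIFETIME_DAYS = 7  # value assumed; the real constant isn't shown

def new_expiry(days=DEFAULT_PASTE_LIFETIME_DAYS):
    """Return an expiration `days` in the future, without the delorean dependency."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return now + datetime.timedelta(days=days)
```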
Mark the object as having been persisted at least once.
Store the latest snapshot of all marked values.
def sync(obj, engine):
"""Mark the object as having been persisted at least once.
Store the latest snapshot of all marked values."""
snapshot = Condition()
# Only expect values (or lack of a va... |
Provided for debug output when rendering conditions.
User.name[3]["foo"][0]["bar"] -> name[3].foo[0].bar
def printable_name(column, path=None):
"""Provided for debug output when rendering conditions.
User.name[3]["foo"][0]["bar"] -> name[3].foo[0].bar
"""
pieces = [column.name]
path = path or... |
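The docstring's example (`User.name[3]["foo"][0]["bar"] -> name[3].foo[0].bar`) pins down the rendering rule: string segments become dotted attributes, integer segments become bracketed indexes. A simplified sketch that takes a plain string instead of a column object:

```python
def printable_name(name, path=None):
    """Render a debug path, e.g. ('name', [3, 'foo', 0, 'bar']) -> name[3].foo[0].bar"""
    pieces = [name]
    for segment in (path or []):
        if isinstance(segment, str):
            pieces.append('.' + segment)   # map-style key
        else:
            pieces.append('[{}]'.format(segment))  # list-style index
    return ''.join(pieces)
```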
Yield all conditions within the given condition.
If the root condition is and/or/not, it is not yielded (unless a cyclic reference to it is found).
def iter_conditions(condition):
"""Yield all conditions within the given condition.
If the root condition is and/or/not, it is not yielded (unless a cyclic r... |
Yield all columns in the condition or its inner conditions.
Unwraps proxies when the condition's column (or any of its values) include paths.
def iter_columns(condition):
"""
Yield all columns in the condition or its inner conditions.
Unwraps proxies when the condition's column (or any of its values)... |
inner=True uses column.typedef.inner_type instead of column.typedef
def _value_ref(self, column, value, *, dumped=False, inner=False):
"""inner=True uses column.typedef.inner_type instead of column.typedef"""
ref = ":v{}".format(self.next_index)
# Need to dump this value
if not dumped:... |
Returns a NamedTuple of (name, type, value) for any type of reference.
.. code-block:: python
# Name ref
>>> tracker.any_ref(column=User.email)
Reference(name='email', type='name', value=None)
# Value ref
>>> tracker.any_ref(column=User.email, value... |
Decrement the usage of each ref by 1.
If this was the last use of a ref, remove it from attr_names or attr_values.
def pop_refs(self, *refs):
"""Decrement the usage of each ref by 1.
If this was the last use of a ref, remove it from attr_names or attr_values.
"""
for ref in re... |
Main entry point for rendering multiple expressions. All parameters are optional, except obj when
atomic or update are True.
:param obj: *(Optional)* An object to render an atomic condition or update expression for. Required if
update or atomic are true. Default is False.
:param ... |
The rendered wire format for all conditions that have been rendered. Rendered conditions are never
cleared. A new :class:`~bloop.conditions.ConditionRenderer` should be used for each operation.
def rendered(self):
"""The rendered wire format for all conditions that have been rendered. Rendered condi... |
Replaces the attr dict at the given key with an instance of a Model
def _unpack(self, record, key, expected):
"""Replaces the attr dict at the given key with an instance of a Model"""
attrs = record.get(key)
if attrs is None:
return
obj = unpack_from_dynamodb(
at... |
Repack a record into a cleaner structure for consumption.
def reformat_record(record):
"""Repack a record into a cleaner structure for consumption."""
return {
"key": record["dynamodb"].get("Keys", None),
"new": record["dynamodb"].get("NewImage", None),
"old": record["dynamodb"].get("Ol... |
List[Dict] -> Dict[shard_id, Shard].
Each Shards' parent/children are hooked up with the other Shards in the list.
def unpack_shards(shards, stream_arn, session):
"""List[Dict] -> Dict[shard_id, Shard].
Each Shards' parent/children are hooked up with the other Shards in the list.
"""
if not shard... |
JSON-serializable representation of the current Shard state.
The token is enough to rebuild the Shard as part of rebuilding a Stream.
:returns: Shard state as a json-friendly dict
:rtype: dict
def token(self):
"""JSON-serializable representation of the current Shard state.
Th... |
Generator that yields each :class:`~bloop.stream.shard.Shard` by walking the shard's children in order.
def walk_tree(self):
"""Generator that yields each :class:`~bloop.stream.shard.Shard` by walking the shard's children in order."""
shards = collections.deque([self])
while shards:
... |
Move to a new position in the shard using the standard parameters to GetShardIterator.
:param str iterator_type: "trim_horizon", "at_sequence", "after_sequence", "latest"
:param str sequence_number: *(Optional)* Sequence number to use with at/after sequence. Default is None.
def jump_to(self, *, iter... |
Move the Shard's iterator to the earliest record after the :class:`~datetime.datetime` time.
Returns the first records at or past ``position``. If the list is empty,
the seek failed to find records, either because the Shard is exhausted or it
reached the HEAD of an open Shard.
:param ... |
If the Shard doesn't have any children, tries to find some from DescribeStream.
If the Shard is open this won't find any children, so an empty response doesn't
mean the Shard will **never** have children.
def load_children(self):
"""If the Shard doesn't have any children, tries to find some fr... |
Get the next set of records in this shard. An empty list doesn't guarantee the shard is exhausted.
:returns: A list of reformatted records. May be empty.
def get_records(self):
"""Get the next set of records in this shard. An empty list doesn't guarantee the shard is exhausted.
:returns: A... |
Create backing tables for a model and its non-abstract subclasses.
:param model: Base model to bind. Can be abstract.
:param skip_table_setup: Don't create or verify the table in DynamoDB. Default is False.
:raises bloop.exceptions.InvalidModel: if ``model`` is not a subclass of :class:`~bloo... |
Delete one or more objects.
:param objs: objects to delete.
:param condition: only perform each delete if this condition holds.
:param bool atomic: only perform each delete if the local and DynamoDB versions of the object match.
:raises bloop.exceptions.ConstraintViolation: if the condi... |
Populate objects from DynamoDB.
:param objs: objects to delete.
:param bool consistent: Use `strongly consistent reads`__ if True. Default is False.
:raises bloop.exceptions.MissingKey: if any object doesn't provide a value for a key column.
:raises bloop.exceptions.MissingObjects: if ... |
Create a reusable :class:`~bloop.search.QueryIterator`.
:param model_or_index: A model or index to query. For example, ``User`` or ``User.by_email``.
:param key:
Key condition. This must include an equality against the hash key, and optionally one
of a restricted set of condit... |
Save one or more objects.
:param objs: objects to save.
:param condition: only perform each save if this condition holds.
:param bool atomic: only perform each save if the local and DynamoDB versions of the object match.
:raises bloop.exceptions.ConstraintViolation: if the condition (or... |
Create a reusable :class:`~bloop.search.ScanIterator`.
:param model_or_index: A model or index to scan. For example, ``User`` or ``User.by_email``.
:param filter: Filter condition. Only matching objects will be included in the results.
:param projection:
"all", "count", a list of ... |
Create a :class:`~bloop.stream.Stream` that provides approximate chronological ordering.
.. code-block:: pycon
# Create a user so we have a record
>>> engine = Engine()
>>> user = User(id=3, email="user@domain.com")
>>> engine.save(user)
>>> user.ema... |
Create a new :class:`~bloop.transactions.ReadTransaction` or :class:`~bloop.transactions.WriteTransaction`.
As a context manager, calling commit when the block exits:
.. code-block:: pycon
>>> engine = Engine()
>>> user = User(id=3, email="user@domain.com")
>>> twe... |