def _wait_for_consistency(checker):
    """
    Eventual consistency: wait until GCS reports something is true.

    This is necessary for e.g. create/delete where the operation might return,
    but won't be reflected for a bit.
    """
    ...

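A minimal sketch of how such a wait can be implemented — a bounded polling loop. The retry count and sleep interval here are illustrative assumptions, not Luigi's actual constants:

import time

# Illustrative constants; real code would make these configurable.
MAX_ATTEMPTS = 10
SLEEP_SECONDS = 1.0

def wait_for_consistency(checker):
    # Poll until the checker reports True, or give up after MAX_ATTEMPTS.
    for _ in range(MAX_ATTEMPTS):
        if checker():
            return True
        time.sleep(SLEEP_SECONDS)
    return False
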
def move(self, source_path, destination_path):
    """
    Rename/move an object from one GCS location to another.
    """
    self.copy(source_path, destination_path)
    self.remove(source_path)

def listdir(self, path):
    """
    Get an iterable with GCS folder contents.

    Iterable contains paths relative to queried path.
    """
    bucket, obj = self._path_to_bucket_and_key(path)
    ...

Yields full object URIs matching the given wildcard.

Currently only the '*' wildcard after the last path delimiter is supported.
(If we need "full" wildcard functionality we should bring in gsutil dependency with its
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/wildcard_iter...

def download(self, path, chunksize=None, chunk_callback=lambda _: False):
    """
    Downloads the object contents to local file system.

    Optionally stops after the first chunk for which chunk_callback returns True.
    """
    ...

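A sketch of the chunked-copy loop such a method implies — read fixed-size chunks and let the callback abort the transfer early. The file objects and default chunk size here are assumptions, not Luigi's internals:

def copy_in_chunks(src, dst, chunksize=8192, chunk_callback=lambda chunk: False):
    # Copy src to dst chunk by chunk; the callback can stop the transfer early.
    while True:
        chunk = src.read(chunksize)
        if not chunk:
            break
        dst.write(chunk)
        if chunk_callback(chunk):
            break
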
def create_subprocess(self, command):
    """
    http://www.chiark.greenend.org.uk/ucgi/~cjwatson/blosxom/2009-07-02-python-sigpipe.html
    """
    def subprocess_setup():
        # Python installs a SIGPIPE ...

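The linked post explains that CPython installs a SIGPIPE handler that turns the signal into an exception, which is usually not what non-Python children expect. A sketch of the standard fix — restore the default disposition in a preexec hook (the command below is a placeholder, and this is Unix-only):

import signal
import subprocess

def subprocess_setup():
    # Restore default SIGPIPE handling so the child dies quietly on a broken pipe.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

proc = subprocess.Popen(
    ['cat'],                       # placeholder command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    preexec_fn=subprocess_setup,
    close_fds=True)
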
def _finish(self):
    """
    Closes and waits for subprocess to exit.
    """
    if self._process.returncode is None:
        self._process.stdin.flush()
        self._process.stdin.close()
        self._process.wait()
        self.closed = True

def check_complete(task, out_queue):
    """
    Checks if task is complete, puts the result to out_queue.
    """
    logger.debug("Checking if %s is complete", task)
    try:
        is_complete = task.complete()
    except Exception:
        is_complete = Tr...

def _add_task(self, *args, **kwargs):
    """
    Call ``self._scheduler.add_task``, but store the values too so we can
    implement :py:func:`luigi.execution_summary.summary`.
    """
    ...

def add(self, task, multiprocess=False, processes=0):
    """
    Add a Task for the worker to check and possibly schedule and run.

    Returns True if task and its dependencies were successfully scheduled or completed before.
    """
    ...

def _purge_children(self):
    """
    Find dead children and put a response on the result queue.

    :return:
    """
    for task_id, p in six.iteritems(self._running_tasks):
        if not p.is_alive() and p.exitco...

We have to catch three ways a task can be "done":

1. normal execution: the task runs/fails and puts a result back on the queue,
2. new dependencies: the task yielded new deps that were not complete and
   will be rescheduled and dependencies added,
3. child process dies: we need to catc...

Returns true if a worker should stay alive, given the following rules:

If worker-keep-alive is not set, this will always return false.
For an assistant, it will always return the value of worker-keep-alive.
Otherwise, it will return true for nonzero n_pending_tasks.

If worker-count-uniques is true, it will...

def run(self):
    """
    Returns True if all scheduled tasks were executed successfully.
    """
    logger.info('Running Worker with %d processes', self.worker_processes)
    sleeper = self._sleeper()
    self.run_succeeded = True
    ...

def _upgrade_schema(engine):
    """
    Ensure the database schema is up to date with the codebase.

    :param engine: SQLAlchemy engine of the underlying database.
    """
    inspector = re...

def find_all_by_parameters(self, task_name, session=None, **task_params):
    """
    Find tasks with the given task_name and the same parameters as the kwargs.
    """
    with self._session(session) as session:
        que...

def find_all_runs(self, session=None):
    """
    Return all tasks that have been updated.
    """
    with self._session(session) as session:
        return session.query(TaskRecord).all()

def find_all_events(self, session=None):
    """
    Return all running/failed/done events.
    """
    with self._session(session) as session:
        return session.query(TaskEvent).all()

def find_task_by_id(self, id, session=None):
    """
    Find task with the given record ID.
    """
    with self._session(session) as session:
        return session.query(TaskRecord).get(id)

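All of these finders share one pattern: borrow the caller's SQLAlchemy session or open one of their own. The excerpt doesn't show _session, but a hedged sketch of what such a context manager typically looks like (the session_factory attribute is an assumption):

from contextlib import contextmanager

@contextmanager
def _session(self, session=None):
    if session:
        yield session                      # reuse the caller's session untouched
    else:
        session = self.session_factory()   # assumed SQLAlchemy sessionmaker
        try:
            yield session
            session.commit()
        except BaseException:
            session.rollback()
            raise
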
Returns a dictionary with keyword arguments for use with discovery.

Prioritizes oauth_credentials or a http client provided by the user.
If none provided, falls back to default credentials provided by google's command line
utilities. If that also fails, tries using httplib2.Http().

Used by `gcs.GCSClient...

def _credentials(self):
    """
    Return a credential string for the provided task. If no valid
    credentials are set, raise a NotImplementedError.
    """
    if self.aws_ac...

def do_prune(self):
    """
    Return True if prune_table, prune_column, and prune_date are implemented.

    If only a subset of the prune variables are overridden, an exception is raised to remind the user to implement all or none.

    Prune (data newer than prune_date deleted) before copying new data in.
    """
    ...

def create_schema(self, connection):
    """
    Will create the schema in the database
    """
    if '.' not in self.table:
        return
    query = 'CREATE SCHEMA IF NOT EXISTS {schema_name};'.format(schema_name=self.table.split('.')[0])
    conn...

Override to provide code for creating the target table.

By default it will be created using types (optionally)
specified in columns.

If overridden, use the provided connection object for
setting up the table in order to create the table and
insert data using the same transactio...

def run(self):
    """
    If the target table doesn't exist, self.create_table
    will be called to attempt to create the table.
    """
    if not self.table:
        raise Exception...

def copy(self, cursor, f):
    """
    Defines copying from s3 into redshift.

    If both key-based and role-based credentials are provided, role-based will be used.
    """
    ...

def does_schema_exist(self, connection):
    """
    Determine whether the schema already exists.
    """
    if '.' in self.table:
        query = ("select 1 as schema_exists "
                 "from pg_namespace "
                 "where nspna...

def does_table_exist(self, connection):
    """
    Determine whether the table already exists.
    """
    if '.' in self.table:
        query = ("select 1 as table_exists "
                 "from information_schema.tables "
                 "wh...

def init_copy(self, connection):
    """
    Perform pre-copy sql - such as creating table, truncating, or removing data older than x.
    """
    if not self.does_schema_exist(connection):
        logger.info...

def post_copy(self, cursor):
    """
    Performs post-copy sql - such as cleansing data, inserting into production table (if copied to temp table), etc.
    """
    logger.info('Executing p...

def post_copy_metacolums(self, cursor):
    """
    Performs post-copy to fill metadata columns.
    """
    logger.info('Executing post copy metadata queries')
    for query in self.metadata_queries:
        cursor.execute(query)

def copy(self, cursor, f):
    """
    Defines copying JSON from s3 into redshift.
    """
    logger.info("Inserting file: %s", f)
    cursor.execute("""
        COPY %s from '%s'
        CREDENTIALS '%s'
        JSON AS '%s' %s
        %s
        ...

def output(self):
    """
    Returns a RedshiftTarget representing the inserted dataset.

    Normally you don't override this.
    """
    # uses class name as a meta-table
    return Redshi...

def run(self):
    """
    Kill any open Redshift sessions for the given database.
    """
    connection = self.output().connect()
    # kill any sessions other than ours and
    # internal Redshift sessions (rdsdb)
    query = ("sele...

def dates(self):
    ''' Returns a list of dates in this date interval.'''
    dates = []
    d = self.date_a
    while d < self.date_b:
        dates.append(d)
        d += datetime.timedelta(1)
    return dates

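The interval is half-open: date_b is excluded. A standalone illustration of the same loop:

import datetime

def dates_between(date_a, date_b):
    # Same arithmetic as dates() above, outside the class for illustration.
    out, d = [], date_a
    while d < date_b:
        out.append(d)
        d += datetime.timedelta(1)
    return out

print(dates_between(datetime.date(2016, 1, 1), datetime.date(2016, 1, 4)))
# [datetime.date(2016, 1, 1), datetime.date(2016, 1, 2), datetime.date(2016, 1, 3)]
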
def hours(self):
    ''' Same as dates() but returns 24 times more info: one for each hour.'''
    for date in self.dates():
        for hour in xrange(24):
            yield datetime.datetime.combine(date, datetime.time(hour))

def run(self):
    """
    The execution of this task will write 4 lines of data on this task's target output.
    """
    with self.output().open('w') as outfile:
        print("data 0 200 10 50 60", file=outfile)
        ...

def copy(self, path, dest, raise_if_exists=False):
    """
    Copies the contents of a single file path to dest.
    """
    if raise_if_exists and dest in self.get_all_data():
        raise RuntimeError('Destination exists: %s' % dest)
    cont...

def remove(self, path, recursive=True, skip_trash=True):
    """
    Removes the given mockfile. skip_trash doesn't have any meaning.
    """
    if recursive:
        to_delete = []
        for s in self.get_all_data().keys():
            ...

def move(self, path, dest, raise_if_exists=False):
    """
    Moves a single file from path to dest.
    """
    if raise_if_exists and dest in self.get_all_data():
        raise RuntimeError('Destination exists: %s' % dest)
    contents = self.get_all_data...

def listdir(self, path):
    """
    listdir does a prefix match of self.get_all_data(), but doesn't yet support globs.
    """
    return [s for s in self.get_all_data().keys()
            if s.startswith(path)]

def move(self, path, raise_if_exists=False):
    """
    Call MockFileSystem's move command
    """
    self.fs.move(self.path, path, raise_if_exists)

def _recursively_freeze(value):
    """
    Recursively walks ``Mapping``s and ``list``s and converts them to ``_FrozenOrderedDict`` and ``tuples``, respectively.
    """
    if isinstance(value, Mapp...

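The body is elided above; a sketch of the recursion it describes, with a minimal stand-in for _FrozenOrderedDict (the real class is assumed to be an immutable, hashable OrderedDict):

from collections import Mapping, OrderedDict   # collections.abc on Python 3

class FrozenOrderedDict(OrderedDict):
    # Stand-in only: the real _FrozenOrderedDict would forbid mutation
    # and define __hash__.
    pass

def recursively_freeze(value):
    if isinstance(value, Mapping):
        return FrozenOrderedDict((k, recursively_freeze(v))
                                for k, v in value.items())
    elif isinstance(value, list) or isinstance(value, tuple):
        return tuple(recursively_freeze(v) for v in value)
    return value
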
def _get_value_from_config(self, section, name):
    """Loads the default from the config. Returns _no_value if it doesn't exist"""
    conf = configuration.get_config()
    try:
        value = conf.get(section, name)
    e...

def _value_iterator(self, task_name, param_name):
    """
    Yield the parameter values, with optional deprecation warning as second tuple value.

    The parameter value will be whatever non-_no_value that is yielded first.
    """
    ...

Parse a list of values from the scheduler.

Only possible if this is_batchable() is True. This will combine the list into a single
parameter value using the batch method. This should never need to be overridden.

:param xs: list of values to parse and combine
:return: the combined parsed val...

def parse(self, s):
    """
    Parses a date string formatted like ``YYYY-MM-DD``.
    """
    return datetime.datetime.strptime(s, self.date_format).date()

def serialize(self, dt):
    """
    Converts the date to a string using the :py:attr:`~_DateParameterBase.date_format`.
    """
    if dt is None:
        return str(dt)
    return dt.strftime(self.date_format)

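parse and serialize are inverses; a round trip, assuming the default date_format of '%Y-%m-%d':

import datetime

date_format = '%Y-%m-%d'   # assumed default

d = datetime.datetime.strptime('2016-04-01', date_format).date()
assert d.strftime(date_format) == '2016-04-01'
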
def _add_months(self, date, months):
    """
    Add ``months`` months to ``date``.

    Unfortunately we can't use timedeltas to add months because timedelta counts in days
    and there's no foolproof way to add N months in days without counting the number of
    days per month.
    """
    ...

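One way to do the month arithmetic the docstring alludes to — convert to a flat month count, shift, and split back. Pinning the day-of-month to 1 is an assumption that callers work with month-aligned dates:

import datetime

def add_months(date, months):
    # Total months since year 0, shifted, then split back into year/month.
    total = date.year * 12 + (date.month - 1) + months
    year, month0 = divmod(total, 12)
    return datetime.date(year, month0 + 1, 1)

print(add_months(datetime.date(2015, 11, 15), 3))   # 2016-02-01
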
def normalize(self, dt):
    """
    Clamp dt to every Nth :py:attr:`~_DatetimeParameterBase.interval` starting at
    :py:attr:`~_DatetimeParameterBase.start`.
    """
    ...

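A sketch of the clamping arithmetic: floor the offset from start to a whole number of intervals. Treating the interval as a count of minutes is an assumption:

import datetime

def normalize(dt, start, interval_minutes):
    # Offset from start in seconds, floored to a whole number of intervals.
    offset = (dt - start).total_seconds()
    step = interval_minutes * 60
    return start + datetime.timedelta(seconds=(offset // step) * step)
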
def parse(self, val):
    """
    Parses a ``bool`` from the string, matching 'true' or 'false' ignoring case.
    """
    s = str(val).lower()
    if s == "true":
        return True
    elif s == "false":
        return False
    ...

def parse(self, s):
    """
    Parses a :py:class:`~luigi.date_interval.DateInterval` from the input.

    See :py:mod:`luigi.date_interval`
    for details on the parsing of DateIntervals.
    """
    ...

def parse(self, input):
    """
    Parses a time delta from the input.

    See :py:class:`TimeDeltaParameter` for details on supported formats.
    """
    result = self._parseIso8601...

def serialize(self, x):
    """
    Converts datetime.timedelta to a string

    :param x: the value to serialize.
    """
    weeks = x.days // 7
    days = x.days % 7
    hours = x.seconds // 3600
    m...

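The decomposition presumably continues with minutes and seconds; a standalone version of the arithmetic (the exact output format is an assumption):

import datetime

def serialize_timedelta(x):
    weeks, days = divmod(x.days, 7)
    hours, rem = divmod(x.seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return '{} w {} d {} h {} m {} s'.format(weeks, days, hours, minutes, seconds)

print(serialize_timedelta(datetime.timedelta(days=10, seconds=3725)))
# 1 w 3 d 1 h 2 m 5 s
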
def parse(self, x):
    """
    Parse an individual value from the input.

    :param str x: the value to parse.
    :return: the parsed value.
    """
    # Since the result ...

def run(self):
    """
    1. count the words for each of the :py:meth:`~.InputText.output` targets created by :py:class:`~.InputText`
    2. write the count into the :py:meth:`~.WordCount.output` target
    """
    ...

def create_packages_archive(packages, filename):
    """
    Create a tar archive which will contain the files for the packages listed in packages.
    """
    import tarfile
    tar = tarfile.open(filename, "w")

    def add(src, d...

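The elided part presumably resolves each module's source path and adds it under the package's dotted name; a hedged sketch of that core logic (submodule walking and error handling omitted):

import os
import tarfile

def create_packages_archive(packages, filename):
    tar = tarfile.open(filename, "w")
    for package in packages:
        path = package.__file__
        if path.endswith('.pyc'):
            path = path[:-1]                          # prefer the .py source
        name = package.__name__.replace('.', '/')
        if os.path.basename(path) == '__init__.py':
            tar.add(os.path.dirname(path), name)      # whole package directory
        else:
            tar.add(path, name + '.py')               # single module
    tar.close()
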
def flatten(sequence):
    """
    A simple generator which flattens a sequence.

    Only one level is flattened.

    .. code-block:: python

        (1, (2, 3), 4) -> (1, 2, 3, 4)
    """
    ...

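The generator body is elided above; one-level flattening can be written as follows (a sketch, treating strings as atoms):

def flatten(sequence):
    for item in sequence:
        if hasattr(item, '__iter__') and not isinstance(item, (str, bytes)):
            for sub in item:       # flatten exactly one level
                yield sub
        else:
            yield item

print(tuple(flatten((1, (2, 3), 4))))   # (1, 2, 3, 4)
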
Runs the job by invoking the command from the given arglist.

Finds tracking urls from the output and attempts to fetch errors using those urls if the job fails.
Throws HadoopJobError with information about the error
(including stdout and stderr from the process)
on failure and returns normally otherwise...

Uses mechanize to fetch the actual task logs from the task tracker.

This is highly opportunistic, and we might not succeed.
So we set a low timeout and hope it works.
If it does not, it's not the end of the world.

TODO: Yarn has a REST API that we should probably use instead:
http://hadoop.apache....

def _get_pool(self):
    """ Protected method """
    if self.pool:
        return self.pool
    if hadoop().pool:
        return hadoop().pool

def job_runner(self):
    """
    Get the MapReduce runner for this job.

    If all outputs are HdfsTargets, the DefaultHadoopJobRunner will be used.
    Otherwise, the LocalJobRunner which streams all data through the local machine
    will be used (great for testing).
    """
    # We recommend that you define a subcla...

def writer(self, outputs, stdout, stderr=sys.stderr):
    """
    Writer format is a method which iterates over the output records
    from the reducer and formats them for output.

    The default implementation outputs tab separated items.
    """
    ...

def incr_counter(self, *args, **kwargs):
    """
    Increments a Hadoop counter.

    Since counters can be a bit slow to update, this batches the updates.
    """
    threshold = kwargs.get...

def _flush_batch_incr_counter(self):
    """
    Increments any unflushed counter values.
    """
    for key, count in six.iteritems(self._counter_dict):
        if count == 0:
            continue
        args = list(key) + [count]
        self...

def _incr_counter(self, *args):
    """
    Increments a Hadoop counter.

    Note that this seems to be a bit slow, ~1 ms.
    Don't overuse this function by updating very frequently.
    """
    ...

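Together, these three methods form a small write-behind batch: increments accumulate in a dict and are emitted once a threshold is crossed, or at flush time. A self-contained sketch of the pattern; the threshold value and the emit hook are assumptions:

class BatchedCounters(object):
    FLUSH_THRESHOLD = 100                       # assumed batch size

    def __init__(self, emit):
        self._emit = emit                       # e.g. the slow per-call counter update
        self._counts = {}

    def incr(self, key, amount=1):
        self._counts[key] = self._counts.get(key, 0) + amount
        if self._counts[key] >= self.FLUSH_THRESHOLD:
            self._emit(key, self._counts.pop(key))

    def flush(self):
        for key, count in list(self._counts.items()):
            if count:
                self._emit(key, count)
        self._counts.clear()
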
def dump(self, directory=''):
    """
    Dump instance to file.
    """
    with self.no_unpicklable_properties():
        file_name = os.path.join(directory, 'job-instance.pickle')
        if self.__module__ == '__main__':
            d = pickle.dumps(self)
            ...

def _map_input(self, input_stream):
    """
    Iterate over input and call the mapper for each item.

    If the job has a parser defined, the return values from the parser will
    be passed as arguments to the mapper.

    If the input is coded output from a previous run,
    the arguments will be split into key and value.
    """
    ...

def _reduce_input(self, inputs, reducer, final=NotImplemented):
    """
    Iterate over input, collect values with the same key, and call the reducer for each unique key.
    """
    for key, values in grou...

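The elided loop is very likely itertools.groupby over key-sorted input (Hadoop sorts between the map and reduce phases); a sketch:

from itertools import groupby
from operator import itemgetter

def reduce_input(inputs, reducer):
    # inputs: iterable of (key, value) pairs, already sorted by key.
    for key, pairs in groupby(inputs, key=itemgetter(0)):
        for output in reducer(key, (value for _, value in pairs)):
            yield output
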
def run_mapper(self, stdin=sys.stdin, stdout=sys.stdout):
    """
    Run the mapper on the hadoop node.
    """
    self.init_hadoop()
    self.init_mapper()
    outputs = self._map_input((line[:-1] for line in stdin))
    if self.reducer == NotImplemente...

def run_reducer(self, stdin=sys.stdin, stdout=sys.stdout):
    """
    Run the reducer on the hadoop node.
    """
    self.init_hadoop()
    self.init_reducer()
    outputs = self._reduce_input(self.internal_reader((line[:-1] for line in stdin)), self.reduce...

def internal_reader(self, input_stream):
    """
    Reader which uses python eval on each part of a tab separated string.

    Yields a tuple of python objects.
    """
    for input_line...

def internal_writer(self, outputs, stdout):
    """
    Writer which outputs the python repr for each item.
    """
    for output in outputs:
        print("\t".join(map(self.internal_serialize, output)), file=stdout)

def touch(self, connection=None):
    """
    Mark this update as complete.

    Important: If the marker table doesn't exist, the connection transaction will be aborted
    and the connection reset.
    Then the marker table will be created.
    """
    ...

def connect(self):
    """
    Get a psycopg2 connection object to the database where the table is.
    """
    connection = psycopg2.connect(
        host=self.host,
        port=self.port,
        database=self.database,
        ...

def create_marker_table(self):
    """
    Create marker table if it doesn't exist.

    Using a separate connection since the transaction might have to be reset.
    """
    ...

def rows(self):
    """
    Return/yield tuples or lists corresponding to each row to be inserted.
    """
    with self.input().open('r') as fobj:
        for line in fobj:
            yield line.strip('\n').split('\t')

def map_column(self, value):
    """
    Applied to each column of every row returned by `rows`.

    Default behaviour is to escape special characters and identify any self.null_values.
    """
    ...

def output(self):
    """
    Returns a PostgresTarget representing the inserted dataset.

    Normally you don't override this.
    """
    return PostgresTarget(
        host=self.host,
        ...

def run(self):
    """
    Inserts data generated by rows() into target table.

    If the target table doesn't exist, self.create_table will be called to attempt to create the table.

    Normally you don't want to override this.
    """
    ...

def get_config(parser=PARSER):
    """Get configs singleton for parser"""
    parser_class = PARSERS[parser]
    _check_parser(parser_class, parser)
    return parser_class.instance()

def add_config_path(path):
    """Select config parser by file extension and add path into parser."""
    if not os.path.isfile(path):
        warnings.warn("Config file does not exist: {path}".format(path=path))
        return False
    # select p...

def _setup_packages(self, sc):
    """
    This method compresses and uploads packages to the cluster
    """
    packages = self.py_packages
    if not packages:
        return
    for package in packages:
        mod = import...

def main(args=None, stdin=sys.stdin, stdout=sys.stdout, print_exception=print_exception):
    """
    Run either the mapper, combiner, or reducer from the class instance in the file "job-instance.pickle".

    Arguments:

    kind -- is either map, combiner, or reduce
    """
    ...

def run_with_retcodes(argv):
    """
    Run luigi with command line parsing, but raise ``SystemExit`` with the configured exit code.

    Note: Usually you use the luigi binary directly and don't call this function yourself.

    :param argv: Should (conceptually) be ``sys.argv[1:]``
    """
    ...

def find_deps_cli():
    '''
    Finds all tasks on all paths from provided CLI task
    '''
    cmdline_args = sys.argv[1:]
    with CmdlineParser.global_instance(cmdline_args) as cp:
        return find_deps(cp.get_task_obj(), upstream().family)

def get_task_output_description(task_output):
    '''
    Returns a task's output as a string
    '''
    output_description = "n/a"
    if isinstance(task_output, RemoteTarget):
        output_description = "[SSH] {0}:{1}".format(task_output._fs.remote_context.host, task_output.pa...

Tweaks glob into a list of more specific globs that together still cover paths and not too much extra.

Saves us minutes-long listings for long dataset histories.

Specifically, in this implementation the leftmost occurrences of "[0-9]"
give rise to a few separate globs that each specialize the expression t...

def most_common(items):
    """
    Wanted functionality from Counters (new in Python 2.7).
    """
    counts = {}
    for i in items:
        counts.setdefault(i, 0)
        counts[i] += 1
    return max(six.iteritems(counts), key=operator.itemgetter(1))

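On Python >= 2.7 the same result comes from collections.Counter; for comparison:

from collections import Counter

items = ['a', 'b', 'a', 'c', 'a']
print(Counter(items).most_common(1)[0])   # ('a', 3), same as most_common(items)
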
Builds a glob listing existing output paths.

Esoteric reverse engineering, but worth it given that (compared to an
equivalent contiguousness guarantee by naive complete() checks)
requests to the filesystem are cut by orders of magnitude, and users
don't even have to retrofit existing tasks anyhow.

def...

Yields a (filesystem, glob) tuple per every output location of task.

The task can have one or several FileSystemTarget outputs.
For convenience, the task can be a luigi.WrapperTask,
in which case outputs of all its dependencies are considered.

def _get_filesystems_and_globs(datetime_to_task, datetime_to_...

def _list_existing(filesystem, glob, paths):
    """
    Get all the paths that do in fact exist. Returns a set of all existing paths.

    Takes a luigi.target.FileSystem object, a str which represents a glob and
    a list of strings representing paths.
    """
    ...

Efficiently determines missing datetimes by filesystem listing.

The current implementation works for the common case of a task writing
output to a ``FileSystemTarget`` whose path is built using strftime with
format like '...%Y...%m...%d...%H...', without custom ``complete()`` or
``exists()``.

(Eve...

def of_cls(self):
    """
    DONT USE. Will be deleted soon. Use ``self.of``!
    """
    if isinstance(self.of, six.string_types):
        warnings.warn('When using Range programatically, dont pass "of" param as string!')
        return Registe...

def _emit_metrics(self, missing_datetimes, finite_start, finite_stop):
    """
    For consistent metrics one should consider the entire range, but
    it is open (infinite) if stop or start is None.
    Hence make do with metrics respective to the finite simplification.
    """
    ...

def missing_datetimes(self, finite_datetimes):
    """
    Override in subclasses to do bulk checks.

    Returns a sorted list.

    This is a conservative base implementation that brutally checks completeness, instance by instance.
    Inadvisable as it may be slow.
    """
    ...

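A sketch of what that brute-force base implementation amounts to — instantiate and check each candidate (datetime_to_task, an assumed mapping from datetime to task instance, stands in for whatever the class uses internally):

def naive_missing_datetimes(datetime_to_task, finite_datetimes):
    # One complete() check per candidate datetime; correct but slow.
    return sorted(dt for dt in finite_datetimes
                  if not datetime_to_task(dt).complete())
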
def _missing_datetimes(self, finite_datetimes):
    """
    Backward compatible wrapper. Will be deleted eventually (stated on Dec 2015)
    """
    try:
        return self.missing_datetimes(finite_datetimes)
    exce...

def parameters_to_datetime(self, p):
    """
    Given a dictionary of parameters, will extract the ranged task parameter value
    """
    dt = p[self._param_name]
    return datetime(dt.year, dt.month, dt.day)

def finite_datetimes(self, finite_start, finite_stop):
    """
    Simply returns the points in time that correspond to turn of day.
    """
    date_start = datetime(finite_start.year, finite_start.month, finite_start.day)
    dates...

def finite_datetimes(self, finite_start, finite_stop):
    """
    Simply returns the points in time that correspond to whole hours.
    """
    datehour_start = datetime(finite_start.year, finite_start.month, finite_start.day, finite_st...