Returns the names of all options that are required but were not specified.
All options that don't have a default value are required in order to run the
workflow.
Args:
args (dict): A dictionary of the provided arguments that is checked for
missing options.
... |
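The check described above can be sketched as follows; the option representation (a dict whose entries may carry a 'default' key) and the function name are assumptions for illustration, not the library's actual data model:

```python
# Hypothetical sketch: options without a 'default' key are required.
# The dict-of-dicts option format is an assumption for illustration.
def missing_required_options(options, args):
    """Return the sorted names of required options absent from args."""
    required = {name for name, spec in options.items() if 'default' not in spec}
    return sorted(required - set(args))
```

For example, `missing_required_options({'host': {}, 'port': {'default': 8080}}, {})` reports only `'host'`, because `port` falls back to its default.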
Consolidate the provided arguments.
If the provided arguments have matching options, this performs a type conversion.
For any option that has a default value and is not present in the provided
arguments, the default value is added.
Args:
args (dict): A dictionary of the pro... |
Store the task graph definition (schema).
The schema has to adhere to the following rules:
A key in the schema dict represents a parent task and the value one or more
children:
{parent: [child]} or {parent: [child1, child2]}
The data output of one task can be routed to a l... |
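The `{parent: [child]}` schema rule above can be illustrated with a minimal sketch that flattens such a dict into explicit edges; the function name is an assumption, not the library's API:

```python
# Minimal sketch: expand a {parent: [children]} task-graph schema
# into a flat list of (parent, child) edge tuples.
def edges_from_schema(schema):
    return [(parent, child)
            for parent, children in schema.items()
            for child in children]
```

A schema like `{'a': ['b', 'c'], 'b': ['d']}` thus yields the edges `('a', 'b')`, `('a', 'c')`, `('b', 'd')`.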
Run the dag by calling the tasks in the correct order.
Args:
config (Config): Reference to the configuration object from which the
settings for the dag are retrieved.
workflow_id (str): The unique ID of the workflow that runs this dag.
signal (Da... |
Validate the graph by checking whether it is a directed acyclic graph.
Args:
graph (DiGraph): Reference to a DiGraph object from NetworkX.
Raises:
DirectedAcyclicGraphInvalid: If the graph is not a valid dag.
def validate(self, graph):
""" Validate the graph by checkin... |
Construct the task graph (dag) from a given schema.
Parses the graph schema definition and creates the task graph. Tasks are the
vertices of the graph and the connections defined in the schema become the edges.
A key in the schema dict represents a parent task and the value one or more
... |
Merge the specified dataset on top of the existing data.
This replaces all values in the existing dataset with the values from the
given dataset.
Args:
dataset (TaskData): A reference to the TaskData object that should be merged
on top of the existing object.
def m... |
Add a new dataset to the MultiTaskData.
Args:
task_name (str): The name of the task from which the dataset was received.
dataset (TaskData): The dataset that should be added.
aliases (list): A list of aliases that should be registered with the dataset.
def add_dataset(self,... |
Add an alias pointing to the specified index.
Args:
alias (str): The alias that should point to the given index.
index (int): The index of the dataset for which an alias should be added.
Raises:
DataInvalidIndex: If the index does not represent a valid dataset.
def... |
Merge all datasets into a single dataset.
The default dataset is the last dataset to be merged, as it is considered to be
the primary source of information and should overwrite all existing fields with
the same key.
Args:
in_place (bool): Set to ``True`` to replace the exis... |
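The merge-order rule above (default dataset last, so its fields win) can be sketched on plain dicts; the function name and list-of-dicts representation are assumptions for illustration:

```python
# Sketch: merge dict datasets, applying the default dataset last so
# its values overwrite any existing fields with the same key.
def merge_datasets(datasets, default_index):
    others = [d for i, d in enumerate(datasets) if i != default_index]
    merged = {}
    for dataset in others + [datasets[default_index]]:
        merged.update(dataset)
    return merged
```

With `[{'a': 1, 'b': 2}, {'b': 9}]`, choosing index 0 as the default keeps `b == 2`; choosing index 1 keeps `b == 9`.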
Set the default dataset by its alias.
After changing the default dataset, all calls without explicitly specifying the
dataset by index or alias will be redirected to this dataset.
Args:
alias (str): The alias of the dataset that should be made the default.
Raises:
... |
Set the default dataset by its index.
After changing the default dataset, all calls without explicitly specifying the
dataset by index or alias will be redirected to this dataset.
Args:
index (int): The index of the dataset that should be made the default.
Raises:
... |
Return a dataset by its alias.
Args:
alias (str): The alias of the dataset that should be returned.
Raises:
DataInvalidAlias: If the alias does not represent a valid dataset.
def get_by_alias(self, alias):
""" Return a dataset by its alias.
Args:
a... |
Return a dataset by its index.
Args:
index (int): The index of the dataset that should be returned.
Raises:
DataInvalidIndex: If the index does not represent a valid dataset.
def get_by_index(self, index):
""" Return a dataset by its index.
Args:
i... |
The main run method of the Python task.
Args:
data (:class:`.MultiTaskData`): The data object that has been passed from the
predecessor task.
store (:class:`.DataStoreDocument`): The persistent data store object that allows the
task to store data for acce... |
Return the task context content as a dictionary.
def to_dict(self):
""" Return the task context content as a dictionary. """
return {
'task_name': self.task_name,
'dag_name': self.dag_name,
'workflow_name': self.workflow_name,
'workflow_id': self.workflow... |
Start a worker process.
Args:
queues (list): List of queue names this worker accepts jobs from.
config (Config): Reference to the configuration object from which the
settings for the worker are retrieved.
name (string): Unique name for the worker. The hostname template variables... |
Stop a worker process.
Args:
config (Config): Reference to the configuration object from which the
settings for the worker are retrieved.
worker_ids (list): An optional list of ids for the worker that should be stopped.
def stop_worker(config, *, worker_ids=None):
""" Stop a worker... |
Return a list of all available workers.
Args:
config (Config): Reference to the configuration object from which the
settings are retrieved.
filter_by_queues (list): Restrict the returned workers to workers that listen to
at least one of the queue names in this list.
Ret... |
Return a new object in which callable parameters have been evaluated.
Native types are not touched and simply returned, while callable methods are
executed and their return value is returned.
Args:
data (MultiTaskData): The data object that has been passed from the
... |
Evaluate the value of a single parameter, taking callables into account.
Native types are not touched and simply returned, while callable methods are
executed and their return value is returned.
Args:
key (str): The name of the parameter that should be evaluated.
data (... |
Download the latest official list of `ARRL Logbook of the World (LOTW)`__ users.
Args:
url (str, optional): Download URL
Returns:
dict: Dictionary containing the callsign (unicode) and the date of the last LOTW upload (datetime)
Raises:
IOError: When network is unav... |
Download the latest official list of `Clublog`__ users.
Args:
url (str, optional): Download URL
Returns:
dict: Dictionary containing (if data available) the fields:
firstqso, lastqso, last-lotw, lastupload (datetime),
locator (string) and oqrs (bo... |
Download the latest official list of `EQSL.cc`__ users. The list of users can be found here_.
Args:
url (str, optional): Download URL
Returns:
list: List containing the callsigns of EQSL users (unicode)
Raises:
IOError: When network is unavailable, file can... |
Copy the complete lookup data into redis. Old data will be overwritten.
Args:
redis_prefix (str): Prefix to distinguish the data in redis for the different lookup types
redis_instance (str): an instance of Redis
Returns:
bool: returns True when the data has been copied... |
Returns lookup data of an ADIF Entity
Args:
entity (int): ADIF identifier of country
Returns:
dict: Dictionary containing the country specific data
Raises:
KeyError: No matching entity found
Example:
The following code queries the Cl... |
Create a copy of the dict and remove unneeded data
def _strip_metadata(self, my_dict):
"""
Create a copy of the dict and remove unneeded data
"""
new_dict = copy.deepcopy(my_dict)
if const.START in new_dict:
del new_dict[const.START]
if const.END in new_dict:
... |
Returns lookup data if an exception exists for a callsign
Args:
callsign (string): Amateur radio callsign
timestamp (datetime, optional): datetime in UTC (tzinfo=pytz.UTC)
Returns:
dict: Dictionary containing the country specific data of the callsign
Raises... |
Retrieve the data of an item from redis and put it in an index and data dictionary to match the
common query interface.
def _get_dicts_from_redis(self, name, index_name, redis_prefix, item):
"""
Retrieve the data of an item from redis and put it in an index and data dictionary to match the
... |
Checks if the item is found in the index. An entry in the index points to the data
in the data_dict. This is mainly used to retrieve callsigns and prefixes.
In case data is found for item, a dict containing the data is returned. Otherwise a KeyError is raised.
def _check_data_for_date(self, item, timestam... |
Checks if the callsign is marked as an invalid operation for a given timestamp.
In case the operation is invalid, True is returned. Otherwise a KeyError is raised.
def _check_inv_operation_for_date(self, item, timestamp, data_dict, data_index_dict):
"""
Checks if the callsign is marked as an in... |
Returns lookup data of a Prefix
Args:
prefix (string): Prefix of an Amateur Radio callsign
timestamp (datetime, optional): datetime in UTC (tzinfo=pytz.UTC)
Returns:
dict: Dictionary containing the country specific data of the Prefix
Raises:
KeyE... |
Returns True if an operation is known to be invalid
Args:
callsign (string): Amateur Radio callsign
timestamp (datetime, optional): datetime in UTC (tzinfo=pytz.UTC)
Returns:
bool: True if a record exists for this callsign (at the given time)
Raises:
... |
Checks the index and data if a cq-zone exception exists for the callsign.
When a zone exception is found, the zone is returned. If no exception is found,
a KeyError is raised.
def _check_zone_exception_for_date(self, item, timestamp, data_dict, data_index_dict):
"""
Checks the index and da... |
Returns a CQ Zone if an exception exists for the given callsign
Args:
callsign (string): Amateur radio callsign
timestamp (datetime, optional): datetime in UTC (tzinfo=pytz.UTC)
Returns:
int: Value of the CQ Zone exception which exists for this callsign (at the given ti... |
Set up the Lookup object for Clublog Online API
def _lookup_clublogAPI(self, callsign=None, timestamp=timestamp_now, url="https://secure.clublog.org/dxcc", apikey=None):
""" Set up the Lookup object for Clublog Online API
"""
params = {"year" : timestamp.strftime("%Y"),
"month" : t... |
Performs the dxcc lookup against the QRZ.com XML API:
def _lookup_qrz_dxcc(self, dxcc_or_callsign, apikey, apiv="1.3.3"):
""" Performs the dxcc lookup against the QRZ.com XML API:
"""
response = self._request_dxcc_info_from_qrz(dxcc_or_callsign, apikey, apiv=apiv)
root = BeautifulSoup... |
Performs the callsign lookup against the QRZ.com XML API:
def _lookup_qrz_callsign(self, callsign=None, apikey=None, apiv="1.3.3"):
""" Performs the callsign lookup against the QRZ.com XML API:
"""
if apikey is None:
raise AttributeError("Session Key Missing")
callsign = c... |
Load and process the ClublogXML file either as a download or from file
def _load_clublogXML(self,
url="https://secure.clublog.org/cty.php",
apikey=None,
cty_file=None):
""" Load and process the ClublogXML file either as a download or from ... |
Load and process the country-files.com file either as a download or from file
def _load_countryfile(self,
url="https://www.country-files.com/cty/cty.plist",
country_mapping_filename="countryfilemapping.json",
cty_file=None):
""" Load and proce... |
Download lookup files either from Clublog or Country-files.com
def _download_file(self, url, apikey=None):
""" Download lookup files either from Clublog or Country-files.com
"""
import gzip
import tempfile
cty = {}
cty_date = ""
cty_file_path = None
fil... |
Extract the header of the Clublog XML File
def _extract_clublog_header(self, cty_xml_filename):
"""
Extract the header of the Clublog XML File
"""
cty_header = {}
try:
with open(cty_xml_filename, "r") as cty:
raw_header = cty.readline()
... |
Remove the header of the Clublog XML File to make it
properly parseable for the Python ElementTree XML parser
def _remove_clublog_xml_header(self, cty_xml_filename):
"""
Remove the header of the Clublog XML File to make it
properly parseable for the Python ElementTree XML pa... |
Parse the content of a Clublog XML file and return the
parsed values in dictionaries
def _parse_clublog_xml(self, cty_xml_filename):
"""
Parse the content of a Clublog XML file and return the
parsed values in dictionaries
"""
entities = {}
call_exceptions = {}
... |
Parse the content of a PLIST file from country-files.com and return the
parsed values in dictionaries.
Country-files.com provides Prefixes and Exceptions
def _parse_country_file(self, cty_file, country_mapping_filename=None):
"""
Parse the content of a PLIST file from country-files.com and retu... |
Generates a random word
def _generate_random_word(self, length):
"""
Generates a random word
"""
return ''.join(random.choice(string.ascii_lowercase) for _ in range(length)) |
Checks if the API Key is valid and if the request returned a 200 status (ok)
def _check_html_response(self, response):
"""
Checks if the API Key is valid and if the request returned a 200 status (ok)
"""
error1 = "Access to this form requires a valid API key. For more info see: htt... |
Serialize a Dictionary into JSON
def _serialize_data(self, my_dict):
"""
Serialize a Dictionary into JSON
"""
new_dict = {}
for item in my_dict:
if isinstance(my_dict[item], datetime):
new_dict[item] = my_dict[item].strftime('%Y-%m-%d %H:%M:%S')
... |
Deserialize a JSON into a dictionary
def _deserialize_data(self, json_data):
"""
Deserialize a JSON into a dictionary
"""
my_dict = json.loads(json_data.decode('utf8').replace("'", '"'))
for item in my_dict:
if item == const.ADIF:
... |
Return the names of all callable attributes of an object
def get_methods(*objs):
""" Return the names of all callable attributes of an object"""
return set(
attr
for obj in objs
for attr in dir(obj)
if not attr.startswith('_') and callable(getattr(obj, attr))
) |
Create a new Config object from a configuration file.
Args:
filename (str): The location and name of the configuration file.
strict (bool): If true raises a ConfigLoadError when the configuration
cannot be found.
Returns:
An instance of the Config cl... |
Load the configuration from a file.
The location of the configuration file can either be specified directly in the
parameter filename or is searched for in the following order:
1. In the environment variable given by LIGHTFLOW_CONFIG_ENV
2. In the current execution directory
... |
Load the configuration from a dictionary.
Args:
conf_dict (dict): Dictionary with the configuration.
def load_from_dict(self, conf_dict=None):
""" Load the configuration from a dictionary.
Args:
conf_dict (dict): Dictionary with the configuration.
"""
s... |
Helper method to update an existing configuration with the values from a file.
Loads a configuration file and replaces all values in the existing configuration
dictionary with the values from the file.
Args:
filename (str): The path and name to the configuration file.
def _update_... |
Recursively merges the fields for two dictionaries.
Args:
to_dict (dict): The dictionary onto which the merge is executed.
from_dict (dict): The dictionary merged into to_dict
def _update_dict(self, to_dict, from_dict):
""" Recursively merges the fields for two dictionaries.
... |
Append the workflow and libraries paths to the PYTHONPATH.
def _update_python_paths(self):
""" Append the workflow and libraries paths to the PYTHONPATH. """
for path in self._config['workflows'] + self._config['libraries']:
if os.path.isdir(os.path.abspath(path)):
if path n... |
Chop a line from the DX cluster into pieces and return a dict with the spot data
def decode_char_spot(raw_string):
"""Chop a line from the DX cluster into pieces and return a dict with the spot data"""
data = {}
# Spotter callsign
if re.match(r'[A-Za-z0-9/]+[:$]', raw_string[6:15]):
data[const.SPOTTER] =... |
Decode PC11 message, which usually contains DX Spots
def decode_pc11_message(raw_string):
"""Decode PC11 message, which usually contains DX Spots"""
data = {}
spot = raw_string.split("^")
data[const.FREQUENCY] = float(spot[1])
data[const.DX] = spot[2]
data[const.TIME] = datetime.fromtimestamp(... |
Decode PC23 Message which usually contains WCY
def decode_pc23_message(raw_string):
""" Decode PC23 Message which usually contains WCY """
data = {}
wcy = raw_string.split("^")
data[const.R] = int(wcy[1])
data[const.expk] = int(wcy[2])
data[const.CALLSIGN] = wcy[3]
data[const.A] = wcy[4]
... |
The internal run method that decorates the public run method.
This method makes sure data is being passed to and from the task.
Args:
data (MultiTaskData): The data object that has been passed from the
predecessor task.
store (DataStoreDocument... |
Converts WGS84 coordinates into the corresponding Maidenhead Locator
Args:
latitude (float): Latitude
longitude (float): Longitude
Returns:
string: Maidenhead locator
Raises:
ValueError: When called with wrong or invalid input args
T... |
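The conversion above follows the standard Maidenhead algorithm (field, square, subsquare). A minimal sketch, with the function name, 6-character precision, and reduced error handling as assumptions rather than the library's exact API:

```python
import string

# Sketch of the standard WGS84 -> Maidenhead conversion (6 characters);
# error handling is reduced to a simple range check.
def latlong_to_locator(latitude, longitude):
    if not (-90 <= latitude <= 90 and -180 <= longitude <= 180):
        raise ValueError('coordinates out of range')
    lon = longitude + 180.0
    lat = latitude + 90.0
    locator = string.ascii_uppercase[int(lon // 20)]        # field (20 deg lon)
    locator += string.ascii_uppercase[int(lat // 10)]       # field (10 deg lat)
    locator += str(int((lon % 20) // 2))                    # square (2 deg lon)
    locator += str(int(lat % 10))                           # square (1 deg lat)
    locator += string.ascii_lowercase[int((lon % 2) * 12)]  # subsquare
    locator += string.ascii_lowercase[int((lat % 1) * 24)]  # subsquare
    return locator
```

For instance, central Berlin (52.5 N, 13.4 E) maps to locator `JO62qm`.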
Converts a Maidenhead locator into the corresponding WGS84 coordinates
Args:
locator (string): Locator, either 4 or 6 characters
Returns:
tuple (float, float): Latitude, Longitude
Raises:
ValueError: When called with wrong or invalid input arg
TypeE... |
Calculates the (short path) distance between two Maidenhead locators
Args:
locator1 (string): Locator, either 4 or 6 characters
locator2 (string): Locator, either 4 or 6 characters
Returns:
float: Distance in km
Raises:
ValueError: When called wi... |
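The short-path distance is the great-circle distance, commonly computed with the haversine formula. A sketch that takes lat/lon pairs directly (rather than locators) so it stays self-contained; the name and the 6371 km Earth radius are assumptions:

```python
from math import radians, sin, cos, asin, sqrt

# Sketch: short-path great-circle distance via the haversine formula.
def distance_km(lat1, lon1, lat2, lon2):
    EARTH_RADIUS_KM = 6371.0
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

The long-path distance of the next entry is then simply the circumference minus this value (2 * pi * R - d).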
Calculates the (long path) distance between two Maidenhead locators
Args:
locator1 (string): Locator, either 4 or 6 characters
locator2 (string): Locator, either 4 or 6 characters
Returns:
float: Distance in km
Raises:
ValueError: When called wit... |
Calculates the heading from the first to the second locator
Args:
locator1 (string): Locator, either 4 or 6 characters
locator2 (string): Locator, either 4 or 6 characters
Returns:
float: Heading in deg
Raises:
ValueError: When called with wrong... |
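The heading is the initial great-circle bearing, given by the standard atan2 formula. A sketch on lat/lon pairs (names are assumptions, not the library's API):

```python
from math import radians, degrees, sin, cos, atan2

# Sketch: initial great-circle bearing from point 1 to point 2, in
# degrees clockwise from north.
def heading_deg(lat1, lon1, lat2, lon2):
    phi1, phi2 = radians(lat1), radians(lat2)
    dlam = radians(lon2 - lon1)
    y = sin(dlam) * cos(phi2)
    x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlam)
    return degrees(atan2(y, x)) % 360
```

The long-path heading of the next entry is the opposite direction, i.e. `(heading_deg(...) + 180) % 360`.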
Calculates the heading from the first to the second locator (long path)
Args:
locator1 (string): Locator, either 4 or 6 characters
locator2 (string): Locator, either 4 or 6 characters
Returns:
float: Long path heading in deg
Raises:
ValueError: ... |
Calculates the next sunset and sunrise for a Maidenhead locator at a given date & time
Args:
locator1 (string): Maidenhead Locator, either 4 or 6 characters
calc_date (datetime, optional): Starting datetime for the calculations (UTC)
Returns:
dict: Containing datetim... |
Encode Python objects into a byte stream using cloudpickle.
def cloudpickle_dumps(obj, dumper=cloudpickle.dumps):
""" Encode Python objects into a byte stream using cloudpickle. """
return dumper(obj, protocol=serialization.pickle_protocol) |
Monkey patch Celery to use cloudpickle instead of pickle.
def patch_celery():
""" Monkey patch Celery to use cloudpickle instead of pickle. """
registry = serialization.registry
serialization.pickle = cloudpickle
registry.unregister('pickle')
registry.register('pickle', cloudpickle_dumps, cloudpick... |
Connects to the redis database.
def connect(self):
""" Connects to the redis database. """
self._connection = StrictRedis(
host=self._host,
port=self._port,
db=self._database,
password=self._password) |
Returns a single request.
Takes the first request from the list of requests and returns it. If the list
is empty, None is returned.
Returns:
Request: If a new request is available a Request object is returned,
otherwise None is returned.
def receive(self):
... |
Send a response back to the client that issued a request.
Args:
response (Response): Reference to the response object that should be sent.
def send(self, response):
""" Send a response back to the client that issued a request.
Args:
response (Response): Reference to th... |
Push the request back onto the queue.
Args:
request (Request): Reference to a request object that should be pushed back
onto the request queue.
def restore(self, request):
""" Push the request back onto the queue.
Args:
request (Request):... |
Send a request to the server and wait for its response.
Args:
request (Request): Reference to a request object that is sent to the server.
Returns:
Response: The response from the server to the request.
def send(self, request):
""" Send a request to the server and wait... |
Verifies that the pattern used for matching and finding fulfills the expected structure.
:param pattern: string pattern to verify
:return: True if pattern has proper syntax, False otherwise
def verify_pattern(pattern):
"""Verifies if pattern for matching and finding fulfill expected structure.
:param pa... |
Prints the sentence tree as a string using token_attr from the token (like pos_, tag_, etc.)
:param sent: sentence to print
:param token_attr: chosen attribute to present for tokens (e.g. dep_, pos_, tag_, ...)
def print_tree(sent, token_attr):
"""Prints the sentence tree as a string using token_attr from the token (like p... |
Matches the given sentence against the provided pattern.
:param sentence: sentence from Spacy(see: http://spacy.io/docs/#doc-spans-sents) representing complete statement
:param pattern: pattern to which sentence will be compared
:return: True if the sentence matches the pattern, False otherwise
:raises... |
Find all tokens from the parts of the sentence fitted to the pattern that are at the end of the matched sub-tree (of the sentence)
:param sentence: sentence from Spacy(see: http://spacy.io/docs/#doc-spans-sents) representing complete statement
:param pattern: pattern to which sentence will be compared
:return: Spacy... |
Split a string using a single-character delimiter
@params:
`s`: the string
`delimter`: the single-character delimiter
`trim`: whether to trim each part. Default: True
@examples:
```python
ret = split("'a,b',c", ",")
# ret == ["'a,b'", "c"]
# ',' inside quotes will be recognized.
```
@returns... |
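The quote-aware behavior shown in the example can be sketched with a simple character scan; this is a minimal illustration under the assumption that quotes are not nested and not escaped, not the library's actual implementation:

```python
# Sketch: split on a single-character delimiter, ignoring delimiters
# that appear inside single or double quotes.
def split(s, delimiter, trim=True):
    parts, buf, quote = [], [], None
    for ch in s:
        if quote:                     # inside a quoted run
            buf.append(ch)
            if ch == quote:
                quote = None
        elif ch in ("'", '"'):        # open a quoted run
            quote = ch
            buf.append(ch)
        elif ch == delimiter:         # unquoted delimiter -> cut here
            parts.append(''.join(buf))
            buf = []
        else:
            buf.append(ch)
    parts.append(''.join(buf))
    return [p.strip() for p in parts] if trim else parts
```

This reproduces the documented example: `split("'a,b',c", ",")` yields `["'a,b'", "c"]`.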
Render this template by applying it to `context`.
@params:
`context`: a dictionary of values to use in this rendering.
@returns:
The rendered string
def render(self, **context):
"""
Render this template by applying it to `context`.
@params:
`context`: a dictionary of values to use in this rendering.... |
Add a line of source to the code.
Indentation and newline will be added for you, don't provide them.
@params:
`line`: The line to add
def addLine(self, line):
"""
Add a line of source to the code.
Indentation and newline will be added for you, don't provide them.
@params:
`line`: The line to add
""... |
Get or set the logging level.
def level(self, lvl=None):
'''Get or set the logging level.'''
if not lvl:
return self._lvl
self._lvl = self._parse_level(lvl)
self.stream.setLevel(self._lvl)
logging.root.setLevel(self._lvl) |
Fetch currency conversion rate from the database
def get_rate_from_db(currency: str) -> Decimal:
"""
Fetch currency conversion rate from the database
"""
from .models import ConversionRate
try:
rate = ConversionRate.objects.get_rate(currency)
except ConversionRate.DoesNotExist: # noqa
... |
Get conversion rate to use in exchange
def get_conversion_rate(from_currency: str, to_currency: str) -> Decimal:
"""
Get conversion rate to use in exchange
"""
reverse_rate = False
if to_currency == BASE_CURRENCY:
# Fetch exchange rate for base currency and use 1 / rate for conversion
... |
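The reverse-rate idea above (use 1 / rate when converting back to the base currency) can be sketched as follows; the in-memory `RATES` table, `BASE_CURRENCY` value, and function shape are assumptions standing in for the database lookup:

```python
from decimal import Decimal

# Hypothetical rates quoted against a base currency; stands in for
# the ConversionRate database lookup.
BASE_CURRENCY = 'USD'
RATES = {'EUR': Decimal('0.9'), 'GBP': Decimal('0.8')}

def get_conversion_rate(from_currency, to_currency):
    if from_currency == to_currency:
        return Decimal(1)
    if to_currency == BASE_CURRENCY:
        # Converting back to base: use the reciprocal of the quoted rate.
        return Decimal(1) / RATES[from_currency]
    return RATES[to_currency]
```

Using `Decimal` rather than `float` avoids binary rounding artifacts in monetary arithmetic.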
Exchanges Money, TaxedMoney and their ranges to the specified currency.
The get_rate parameter is a callable taking a single argument (the target currency)
that returns the proper conversion rate
def exchange_currency(
base: T, to_currency: str, *, conversion_rate: Decimal=None) -> T:
"""
Exchanges Money, Ta... |
Normalize *location_name* by stripping punctuation and collapsing
runs of whitespace, and return the normalized name.
def normalize(location_name, preserve_commas=False):
"""Normalize *location_name* by stripping punctuation and collapsing
runs of whitespace, and return the normalized name."""
def repl... |
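The described normalization (strip punctuation, collapse whitespace runs) can be sketched with two regex passes; this is a simplified illustration, not the library's exact rules:

```python
import re

# Sketch: replace punctuation with spaces (optionally keeping commas),
# then collapse runs of whitespace into single spaces.
def normalize(location_name, preserve_commas=False):
    keep = ',' if preserve_commas else ''
    cleaned = re.sub(r'[^\w\s%s]' % keep, ' ', location_name)
    return re.sub(r'\s+', ' ', cleaned).strip()
```

For example, `normalize("St.  Paul,  MN")` gives `'St Paul MN'`, while `preserve_commas=True` keeps the comma.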
Calculates precipitation statistics for the cascade model while aggregating hourly observations
Parameters
----------
months : Months for each season to be used for statistics (array of numpy arrays, default=1-12, e.g., [np.arange(12) + 1])
avg_stats : average statistics for ... |
Calculates statistics in order to derive diurnal patterns of wind speed
def calc_wind_stats(self):
"""
Calculates statistics in order to derive diurnal patterns of wind speed
"""
a, b, t_shift = melodist.fit_cosine_function(self.data.wind)
self.wind.update(a=a, b=b, t_shift=t_sh... |
Calculates statistics in order to derive diurnal patterns of relative humidity.
def calc_humidity_stats(self):
"""
Calculates statistics in order to derive diurnal patterns of relative humidity.
"""
a1, a0 = melodist.calculate_dewpoint_regression(self.data, return_stats=False)
s... |
Calculates statistics in order to derive diurnal patterns of temperature
def calc_temperature_stats(self):
"""
Calculates statistics in order to derive diurnal patterns of temperature
"""
self.temp.max_delta = melodist.get_shift_by_data(self.data.temp, self._lon, self._lat, self._timezo... |
Calculates statistics in order to derive solar radiation from sunshine duration or
minimum/maximum temperature.
Parameters
----------
data_daily : DataFrame, optional
Daily data from the associated ``Station`` object.
day_length : Series, optional
Day le... |
Exports statistical data to a JSON formatted file
Parameters
----------
filename: output file that holds statistics data
def to_json(self, filename=None):
"""
Exports statistical data to a JSON formatted file
Parameters
----------
filename: output... |
Imports statistical data from a JSON formatted file
Parameters
----------
filename: input file that holds statistics data
def from_json(cls, filename):
"""
Imports statistical data from a JSON formatted file
Parameters
----------
filename: input f... |
General function for radiation disaggregation
Args:
daily_data: daily values
sun_times: daily dataframe including results of the util.sun_times function
pot_rad: hourly dataframe including potential radiation
method: keyword specifying the disaggregation method to be used
... |
Calculate potential shortwave radiation for a specific location and time.
This routine calculates global radiation as described in:
Liston, G. E. and Elder, K. (2006): A Meteorological Distribution System for
High-Resolution Terrestrial Modeling (MicroMet), J. Hydrometeorol., 7, 217–234.
Correct... |
Calculates potential shortwave radiation based on minimum and maximum temperature
This routine calculates global radiation as described in:
Bristow, Keith L., and Gaylon S. Campbell: On the relationship between
incoming solar radiation and daily maximum and minimum temperature.
Agricultural and fo... |
Fit the A and C parameters for the Bristow & Campbell (1984) model using observed daily
minimum and maximum temperature and mean daily (e.g. aggregated from hourly values) solar
radiation.
Parameters
----------
tmin : Series
Observed daily minimum temperature.
tmax : Series
... |
Calculate mean daily radiation from observed sunshine duration according to Angstroem (1924).
Parameters
----------
ssd : Series
Observed daily sunshine duration.
day_length : Series
Day lengths as calculated by ``calc_sun_times``.
pot_rad_daily : Series
Mean po... |
Fit the a and b parameters for the Angstroem (1924) model using observed daily
sunshine duration and mean daily (e.g. aggregated from hourly values) solar
radiation.
Parameters
----------
ssd : Series
Observed daily sunshine duration.
day_length : Series
Day lengths a... |
Return a decorator that registers the decorated class as a
resolver with the given *name*.
def register(name):
"""Return a decorator that registers the decorated class as a
resolver with the given *name*."""
def decorator(class_):
if name in known_resolvers:
raise ValueError('duplic... |
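The decorator pattern shown above can be completed as a small self-contained sketch; `known_resolvers` is assumed to be a module-level dict, and `EchoResolver` is a hypothetical example class:

```python
# Sketch of a class-registration decorator: register(name) returns a
# decorator that records the class in known_resolvers under name,
# rejecting duplicate names.
known_resolvers = {}

def register(name):
    def decorator(class_):
        if name in known_resolvers:
            raise ValueError('duplicate resolver name: %r' % name)
        known_resolvers[name] = class_
        return class_
    return decorator

@register('echo')
class EchoResolver:
    pass
```

Returning `class_` unchanged means the decorator only records the class; registering a second class under `'echo'` raises `ValueError`.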