Plot a small (mini) histogram of the data.
Parameters
----------
series: Series
The data to plot.
Returns
-------
str
The resulting image encoded as a string.
def mini_histogram(series, **kwargs):
"""Plot a small (mini) histogram of the data.
Parameters
----------... |
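A minimal sketch of how such a plot-to-string helper could work, assuming matplotlib for rendering and a base64-encoded PNG as the string encoding (the encoding actually used by the original function is truncated above):

import base64
import io

import matplotlib
matplotlib.use('Agg')  # render off-screen, no display required
import matplotlib.pyplot as plt

def mini_histogram_sketch(series, bins=10):
    """Plot a small histogram of `series` and return it as a base64 string."""
    fig, ax = plt.subplots(figsize=(2, 0.75))
    ax.hist(series.dropna(), bins=bins)
    ax.get_yaxis().set_visible(False)  # keep the sparkline-style look
    buf = io.BytesIO()
    fig.savefig(buf, format='png', bbox_inches='tight')
    plt.close(fig)
    return base64.b64encode(buf.getvalue()).decode('ascii')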
Plot image of a matrix correlation.
Parameters
----------
corrdf: DataFrame
The matrix correlation to plot.
title: str
The matrix title
Returns
-------
str
The resulting image encoded as a string.
def correlation_matrix(corrdf, title, **kwargs):
"""Plot image of a matri... |
Calculate value counts and distinct count of a variable (technically a Series).
The result is cached by column name in a global variable to avoid recomputing.
Parameters
----------
data : Series
The data to compute value counts and distinct count for.
Returns
-------
list
value count and distinct cou... |
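A minimal sketch of the caching behaviour described above, assuming a pandas Series and a module-level dict keyed by the Series name (the names here are hypothetical, not the original function's):

_VALUE_COUNTS_CACHE = {}  # module-level cache keyed by column name

def value_counts_cached_sketch(data):
    """Return [value_counts, distinct_count] for a Series, cached by its name."""
    if data.name is not None and data.name in _VALUE_COUNTS_CACHE:
        return _VALUE_COUNTS_CACHE[data.name]
    counts = data.value_counts(dropna=False)
    result = [counts, len(counts)]
    if data.name is not None:
        _VALUE_COUNTS_CACHE[data.name] = result
    return result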
Infer the type of a variable (technically a Series).
The types supported are split in standard types and special types.
Standard types:
* Categorical (`TYPE_CAT`): the default type if no other one can be determined
* Numerical (`TYPE_NUM`): if it contains numbers
* Boolean (`TYPE_BOOL`... |
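A rough sketch of the standard-type inference described above, using pandas dtype checks; the constants and the omission of the special types (which are truncated above) are assumptions of this sketch:

import pandas as pd

TYPE_CAT, TYPE_NUM, TYPE_BOOL = 'CAT', 'NUM', 'BOOL'  # hypothetical constants

def infer_type_sketch(series):
    """Classify a Series as boolean, numerical, or categorical (the default)."""
    values = series.dropna()
    if values.nunique() <= 2 and set(values.unique()) <= {True, False, 0, 1}:
        return TYPE_BOOL
    if pd.api.types.is_numeric_dtype(series):
        return TYPE_NUM
    return TYPE_CAT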
Factory func for filters.
data - policy config for filters
manager - resource type manager (ec2, s3, etc)
def factory(self, data, manager=None):
"""Factory func for filters.
data - policy config for filters
manager - resource type manager (ec2, s3, etc)
"""
# ... |
Determine the immediate parent boolean operator for a filter
def get_block_operator(self):
"""Determine the immediate parent boolean operator for a filter"""
# Top level operator is `and`
block_stack = ['and']
for f in self.manager.iter_filters(block_end=True):
if f is None:... |
Specific validation for `resource_count` type
The `resource_count` type works a little differently because it operates
on the entire set of resources. It:
- does not require `key`
- `value` must be a number
- supports a subset of the OPERATORS list
def _validate_resource... |
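A minimal sketch of the three rules listed above; the exception type and the allowed operator subset are hypothetical stand-ins:

ALLOWED_OPERATORS = ('eq', 'equal', 'gt', 'gte', 'lt', 'lte')  # hypothetical subset

class FilterValidationError(Exception):
    pass

def validate_resource_count_sketch(data):
    """Apply the `resource_count` rules: no key, numeric value, restricted ops."""
    if 'key' in data:
        raise FilterValidationError("resource_count does not take a key")
    value = data.get('value')
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise FilterValidationError("resource_count value must be a number")
    if data.get('op') not in ALLOWED_OPERATORS:
        raise FilterValidationError(
            "resource_count op must be one of %s" % (ALLOWED_OPERATORS,))
    return data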
Given an inventory csv file, return an iterator over keys
def load_manifest_file(client, bucket, schema, versioned, ifilters, key_info):
"""Given an inventory csv file, return an iterator over keys
"""
# To avoid thundering herd downloads, we do an immediate yield for
# interspersed i/o
yield None
... |
Given an inventory location for a bucket, return an iterator over keys
on the most recent delivered manifest.
def load_bucket_inventory(
client, inventory_bucket, inventory_prefix, versioned, ifilters):
"""Given an inventory location for a bucket, return an iterator over keys
on the most recent d... |
Generator to generate a set of keys from
a set of generators; each generator is selected
at random and consumed to exhaustion.
def random_chain(generators):
"""Generator to generate a set of keys from
from a set of generators, each generator is selected
at random and consumed to exhaustion.
... |
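A minimal sketch of the interleaving the docstring describes: select a generator at random for each item and drop it once exhausted, so the set as a whole is consumed to exhaustion:

import random

def random_chain_sketch(generators):
    """Yield items from `generators`, choosing a generator at random each time."""
    pool = list(generators)
    while pool:
        gen = random.choice(pool)
        try:
            yield next(gen)
        except StopIteration:
            pool.remove(gen)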
Check a bucket for a named inventory, and return the destination.
def get_bucket_inventory(client, bucket, inventory_id):
"""Check a bucket for a named inventory, and return the destination."""
inventories = client.list_bucket_inventory_configurations(
Bucket=bucket).get('InventoryConfigurationList', [... |
You can customize the automated documentation by altering
the code directly in this script or the associated jinja2 template
def create_html_file(config):
""" You can customize the automated documentation by altering
the code directly in this script or the associated jinja2 template
"""
log... |
Update this function to help build the link to your file
def get_file_url(path, config):
""" Update this function to help build the link to your file
"""
file_url_regex = re.compile(config['file_url_regex'])
new_path = re.sub(file_url_regex, config['file_url_base'], path)
return new_path |
Gather policy information from files
def gather_file_data(config):
""" Gather policy information from files
"""
file_regex = re.compile(config['file_regex'])
category_regex = re.compile(config['category_regex'])
policies = {}
for root, dirs, files in os.walk(config['c7n_policy_directory']):
... |
Upload html file to S3
def upload_to_s3(file_path, config):
""" Upload html file to S3
"""
logging.info("Uploading file to S3 bucket: %s", config['s3_bucket_name'])
s3 = boto3.resource('s3')
s3_filename = config['s3_bucket_path'] + config['rendered_filename']
s3.Bucket(config['s3_bucket_name'])... |
Return all github repositories in an organization.
def github_repos(organization, github_url, github_token):
"""Return all github repositories in an organization."""
# Get github repos
headers = {"Authorization": "token {}".format(github_token)}
next_cursor = None
while next_cursor is not False:
... |
Stream changes for repos in a GitHub organization.
def org_stream(ctx, organization, github_url, github_token, clone_dir,
verbose, filter, exclude, stream_uri, assume):
"""Stream changes for repos in a GitHub organization.
"""
logging.basicConfig(
format="%(asctime)s: %(name)s:%(leve... |
Checkout repositories from a GitHub organization.
def org_checkout(organization, github_url, github_token, clone_dir,
verbose, filter, exclude):
"""Checkout repositories from a GitHub organization."""
logging.basicConfig(
format="%(asctime)s: %(name)s:%(levelname)s %(message)s",
... |
Policy diff between two arbitrary revisions.
Revision specifiers for source and target can use fancy git refspec syntax
for symbolics, dates, etc.
See: https://git-scm.com/book/en/v2/Git-Tools-Revision-Selection
Default revision selection is dependent on current working tree
branch. The intent is... |
Stream git history policy changes to destination.
Default stream destination is a summary of the policy changes to stdout, one
per line. Also supported for stdout streaming is `jsonline`.
AWS Kinesis and SQS destinations are specified by providing the ARN.
Database destinations are supported by prov... |
return the named subset of policies
def select(self, names):
"""return the named subset of policies"""
return PolicyCollection(
[p for p in self.policies if p.name in names], self.options) |
Show policies changes between arbitrary commits.
The common use form is comparing the heads of two branches.
def delta_commits(self, baseline, target):
"""Show policies changes between arbitrary commits.
The common use form is comparing the heads of two branches.
"""
baseline_... |
Return an iterator of policy changes along a commit lineage in a repo.
def delta_stream(self, target='HEAD', limit=None,
sort=pygit2.GIT_SORT_TIME | pygit2.GIT_SORT_REVERSE,
after=None, before=None):
"""Return an iterator of policy changes along a commit lineage in a r... |
Bookkeeping on internal data structures while iterating a stream.
def _process_stream_delta(self, delta_stream):
"""Bookkeeping on internal data structures while iterating a stream."""
for pchange in delta_stream:
if pchange.kind == ChangeType.ADD:
self.policy_files.setdefau... |
send the given policy change
def send(self, change):
"""send the given policy change"""
self.buf.append(change)
if len(self.buf) % self.BUF_SIZE == 0:
self.flush() |
flush any buffered messages
def flush(self):
"""flush any buffered messages"""
buf = self.buf
self.buf = []
if buf:
self._flush(buf) |
Download firehose archive, aggregate records in memory and write back.
def process_firehose_archive(bucket, key):
"""Download firehose archive, aggregate records in memory and write back."""
data = {}
with tempfile.NamedTemporaryFile(mode='w+b') as fh:
s3.download_file(bucket, key, fh.name)
... |
Split up a firehose s3 object into records
Firehose cloudwatch log delivery of flow logs does not delimit
record boundaries. We have to use knowledge of content to split
the records on boundaries. In the context of flow logs we're
dealing with delimited records.
def records_iter(fh, buffer_size=1024 *... |
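A generic sketch of the buffered splitting described above: read fixed-size chunks, split on a boundary token, and carry any partial trailing record into the next read (the real boundary depends on the flow log content and is an assumption here):

def records_iter_sketch(fh, boundary=b'\n', buffer_size=1024 * 1024):
    """Yield records from a file-like object whose records share a boundary token."""
    remainder = b''
    while True:
        chunk = fh.read(buffer_size)
        if not chunk:
            break
        parts = (remainder + chunk).split(boundary)
        remainder = parts.pop()  # possibly incomplete trailing record
        for part in parts:
            if part:
                yield part
    if remainder:
        yield remainder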
Get an active session in the target account.
def get_session(self, account_id):
"""Get an active session in the target account."""
if account_id not in self.account_sessions:
if account_id not in self.config['accounts']:
raise AccountNotFound("account:%s is unknown" % accoun... |
Scope a schema error to its policy name and resource.
def policy_error_scope(error, data):
"""Scope a schema error to its policy name and resource."""
err_path = list(error.absolute_path)
if err_path[0] != 'policies':
return error
pdata = data['policies'][err_path[1]]
pdata.get('name', 'unk... |
Try to find the best error for humans to resolve.
The jsonschema.exceptions.best_match error is based purely on a
mix of a strong match (ie. not anyOf, oneOf) and schema depth;
this often yields odd results that are semantically confusing.
Instead we can use a bit of structural knowledge of the schema to
... |
get a resource manager for a given resource type.
assumes the query is for the same underlying cloud provider.
def get_resource_manager(self, resource_type, data=None):
"""get a resource manager or a given resource type.
assumes the query is for the same underlying cloud provider.
"""
... |
Augment ElasticBeanstalk Environments with their tags.
def _eb_env_tags(envs, session_factory, retry):
"""Augment ElasticBeanstalk Environments with their tags."""
client = local_session(session_factory).client('elasticbeanstalk')
def process_tags(eb_env):
try:
eb_env['Tags'] = retry(... |
Assemble a document representing all the config state around a bucket.
TODO: Refactor this, the logic here feels quite muddled.
def assemble_bucket(item):
"""Assemble a document representing all the config state around a bucket.
TODO: Refactor this, the logic here feels quite muddled.
"""
factory... |
Tries to get the bucket region from Location.LocationConstraint
Special cases:
LocationConstraint EU defaults to eu-west-1
LocationConstraint null defaults to us-east-1
Args:
b (object): A bucket object
Returns:
string: an aws region string
def get_region(b):
"""Tries... |
Expand each list of cidr, prefix list, user id group pair
by port/protocol as an individual rule.
The console ux automatically expands them out, as addition/removal is
per this expansion; the describe calls automatically group them.
def expand_permissions(self, permissions):
"""Expand e... |
Format a policy's extant records into a report.
def report(policies, start_date, options, output_fh, raw_output_fh=None):
"""Format a policy's extant records into a report."""
regions = set([p.options.region for p in policies])
policy_names = set([p.name for p in policies])
formatter = Formatter(
... |
Retrieve all s3 records for the given policy output url
from the given start date.
def record_set(session_factory, bucket, key_prefix, start_date, specify_hour=False):
"""Retrieve all s3 records for the given policy output url
from the given start date.
"""
s3 = local_session(session_factory).cl... |
Only the first record for each id
def uniq_by_id(self, records):
"""Only the first record for each id"""
uniq = []
keys = set()
for rec in records:
rec_id = rec[self._id_field]
if rec_id not in keys:
uniq.append(rec)
keys.add(rec_i... |
scan org repo status hooks
def run(organization, hook_context, github_url, github_token,
verbose, metrics=False, since=None, assume=None, region=None):
"""scan org repo status hooks"""
logging.basicConfig(level=logging.DEBUG)
since = dateparser.parse(
since, settings={
'RETURN_... |
expand any variables in the action to_from/cc_from fields.
def expand_variables(self, message):
"""expand any variables in the action to_from/cc_from fields.
"""
p = copy.deepcopy(self.data)
if 'to_from' in self.data:
to_from = self.data['to_from'].copy()
to_from... |
Prepare resources for transport.
If we have sensitive or overly large resource metadata to
remove, or additional serialization to perform, this
provides the mechanism.
TODO: consider alternative implementations, at min look at adding
provider as additional disc... |
validate config file
def validate(config):
"""validate config file"""
with open(config) as fh:
content = fh.read()
try:
data = yaml.safe_load(content)
except Exception:
log.error("config file: %s is not valid yaml", config)
raise
try:
jsonschema.validate(da... |
subscribe accounts log groups to target account log group destination
def subscribe(config, accounts, region, merge, debug):
"""subscribe accounts log groups to target account log group destination"""
config = validate.callback(config)
subscription = config.get('subscription')
if subscription is None:... |
run export across accounts and log groups specified in config.
def run(config, start, end, accounts, region, debug):
"""run export across accounts and log groups specified in config."""
config = validate.callback(config)
destination = config.get('destination')
start = start and parse(start) or start
... |
simple decorator that will auto fan out async style in lambda.
outside of lambda, this will invoke synchronously.
def lambdafan(func):
"""simple decorator that will auto fan out async style in lambda.
outside of lambda, this will invoke synchronously.
"""
if 'AWS_LAMBDA_FUNCTION_NAME' not in os.envir... |
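A rough sketch of the branching the docstring describes: outside Lambda the decorator returns the function unchanged, inside Lambda it re-invokes the current function asynchronously via the Lambda API (the payload shape and the single-invocation fan-out are simplifying assumptions):

import functools
import json
import os

import boto3

def lambdafan_sketch(func):
    """Invoke synchronously outside Lambda; fan out via async self-invocation inside."""
    if 'AWS_LAMBDA_FUNCTION_NAME' not in os.environ:
        return func

    @functools.wraps(func)
    def fan_out(*args, **kwargs):
        client = boto3.client('lambda')
        return client.invoke(
            FunctionName=os.environ['AWS_LAMBDA_FUNCTION_NAME'],
            InvocationType='Event',  # asynchronous invocation
            Payload=json.dumps(
                {'args': args, 'kwargs': kwargs}, default=str).encode('utf8'))
    return fan_out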
Filter log groups by shell patterns.
def filter_group_names(groups, patterns):
"""Filter log groups by shell patterns.
"""
group_names = [g['logGroupName'] for g in groups]
matched = set()
for p in patterns:
matched.update(fnmatch.filter(group_names, p))
return [g for g in groups if g['... |
Filter log groups by their creation date.
Also sets group specific value for start to the minimum
of creation date or start.
def filter_creation_date(groups, start, end):
"""Filter log groups by their creation date.
Also sets group specific value for start to the minimum
of creation date or start... |
Filter log groups where the last write was before the start date.
def filter_last_write(client, groups, start):
"""Filter log groups where the last write was before the start date.
"""
retry = get_retry(('ThrottlingException',))
def process_group(group_set):
matched = []
for g in group... |
Filter days where the bucket already has extant export keys.
def filter_extant_exports(client, bucket, prefix, days, start, end=None):
"""Filter days where the bucket already has extant export keys.
"""
end = end or datetime.now()
# days = [start + timedelta(i) for i in range((end-start).days)]
try... |
Check iam permissions for log export access in each account
def access(config, region, accounts=()):
"""Check iam permissions for log export access in each account"""
config = validate.callback(config)
accounts_report = []
def check_access(account):
accounts_report.append(account)
sess... |
size of exported records for a given day.
def size(config, accounts=(), day=None, group=None, human=True, region=None):
"""size of exported records for a given day."""
config = validate.callback(config)
destination = config.get('destination')
client = boto3.Session().client('s3')
day = parse(day)
... |
sync last recorded export to actual
Use --dryrun to check status.
def sync(config, group, accounts=(), dryrun=False, region=None):
"""sync last recorded export to actual
Use --dryrun to check status.
"""
config = validate.callback(config)
destination = config.get('destination')
client = b... |
report current export state status
def status(config, group, accounts=(), region=None):
"""report current export state status"""
config = validate.callback(config)
destination = config.get('destination')
client = boto3.Session().client('s3')
for account in config.get('accounts', ()):
if ac... |
Find exports for a given account
def get_exports(client, bucket, prefix, latest=True):
"""Find exports for a given account
"""
keys = client.list_objects_v2(
Bucket=bucket, Prefix=prefix, Delimiter='/').get('CommonPrefixes', [])
found = []
years = []
for y in keys:
part = y['Pre... |
export a given log group to s3
def export(group, bucket, prefix, start, end, role, poll_period=120,
session=None, name="", region=None):
"""export a given log group to s3"""
start = start and isinstance(start, six.string_types) and parse(start) or start
end = (end and isinstance(start, six.strin... |
Lambda globals cache.
def init():
""" Lambda globals cache.
"""
global sqs, logs, config
if config is None:
with open('config.json') as fh:
config = json.load(fh)
if sqs is None:
sqs = boto3.client('sqs')
if logs is None:
logs = boto3.client('logs') |
Lambda Entrypoint - Log Subscriber
Format log events and relay to sentry (direct or sqs)
def process_log_event(event, context):
"""Lambda Entrypoint - Log Subscriber
Format log events and relay to sentry (direct or sqs)
"""
init()
# Grab the actual error log payload
serialized = event['aw... |
CLI - Replay / Index
def process_log_group(config):
"""CLI - Replay / Index
"""
from c7n.credentials import SessionFactory
factory = SessionFactory(
config.region, config.profile, assume_role=config.role)
session = factory()
client = session.client('logs')
params = dict(logGroupNa... |
Extract a sentry traceback structure
from a python formatted traceback string, per python stdlib
traceback.print_exc().
def parse_traceback(msg, site_path="site-packages", in_app_prefix="c7n"):
"""Extract a sentry traceback structure,
From a python formatted traceback string per python stdlib
trac... |
Lambda function provisioning.
Self contained within the component, to allow for easier reuse.
def get_function(session_factory, name, handler, runtime, role,
log_groups,
project, account_name, account_id,
sentry_dsn,
pattern="Traceback"):
"""... |
Break an iterable into lists of the given size.
def chunks(iterable, size=50):
"""Break an iterable into lists of size"""
batch = []
for n in iterable:
batch.append(n)
if len(batch) % size == 0:
yield batch
batch = []
if batch:
yield batch |
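For example, batching a range into lists of two yields the two full batches and the final partial one:

>>> list(chunks(range(5), size=2))
[[0, 1], [2, 3], [4]]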
daemon queue worker for config notifications
def worker_config(queue, s3_key, period, verbose):
"""daemon queue worker for config notifications"""
logging.basicConfig(level=(verbose and logging.DEBUG or logging.INFO))
logging.getLogger('botocore').setLevel(logging.WARNING)
logging.getLogger('s3transfer... |
Analyze flow log records for application and generate metrics per period
def list_app_resources(
app, env, resources, cmdb, start, end, tz):
"""Analyze flow log records for application and generate metrics per period"""
logging.basicConfig(level=logging.INFO)
start, end = get_dates(start, end, tz)
... |
load resources into resource database.
def load_resources(bucket, prefix, region, account_config, accounts,
assume, start, end, resources, store, db, verbose, debug):
"""load resources into resource database."""
logging.basicConfig(level=(verbose and logging.DEBUG or logging.INFO))
loggi... |
Load external plugins.
Custodian is intended to interact with internal and external systems
that are not suitable for embedding into the custodian code base.
def load_plugins(self):
""" Load external plugins.
Custodian is intended to interact with internal and external systems
... |
Submit a function for serialized execution on sqs
def submit(self, func, *args, **kwargs):
"""Submit a function for serialized execution on sqs
"""
self.op_sequence += 1
self.sqs.send_message(
QueueUrl=self.map_queue,
MessageBody=utils.dumps({'args': args, 'kwarg... |
Fetch results from separate queue
def gather(self):
"""Fetch results from separate queue
"""
limit = self.op_sequence - self.op_sequence_start
results = MessageIterator(self.sqs, self.reduce_queue, limit)
for m in results:
# sequence_id from above
msg_id ... |
normalize tag format on ecs resources to match common aws format.
def ecs_tag_normalize(resources):
"""normalize tag format on ecs resources to match common aws format."""
for r in resources:
if 'tags' in r:
r['Tags'] = [{'Key': t['key'], 'Value': t['value']} for t in r['tags']]
... |
Retrieve ecs resources for serverless policies or related resources
Requires arns in new format.
https://docs.aws.amazon.com/AmazonECS/latest/userguide/ecs-resource-ids.html
def get_resources(self, ids, cache=True):
"""Retrieve ecs resources for serverless policies or related resources
... |
resource types used by the collection.
def resource_types(self):
"""resource types used by the collection."""
rtypes = set()
for p in self.policies:
rtypes.add(p.resource_type)
return rtypes |
Retrieve any associated metrics for the policy.
def get_metrics(self, start, end, period):
"""Retrieve any associated metrics for the policy."""
values = {}
default_dimensions = {
'Policy': self.policy.name, 'ResType': self.policy.resource_type,
'Scope': 'Policy'}
... |
Run policy in push mode against given event.
Lambda automatically generates cloud watch logs, and metrics
for us, albeit with some deficiencies: metrics no longer count
against valid resource matches, but against execution.
If metrics execution option is enabled, custodian will generat... |
Get runtime variables for policy interpolation.
Runtime variables are merged with the passed in variables
if any.
def get_variables(self, variables=None):
"""Get runtime variables for policy interpolation.
Runtime variables are merged with the passed in variables
if any.
... |
Expand variables in policy data.
Updates the policy data in-place.
def expand_variables(self, variables):
"""Expand variables in policy data.
Updates the policy data in-place.
"""
# format string values returns a copy
updated = utils.format_string_values(self.data, **v... |
get permissions needed by this policy
def get_permissions(self):
"""get permissions needed by this policy"""
permissions = set()
permissions.update(self.resource_manager.get_permissions())
for f in self.resource_manager.filters:
permissions.update(f.get_permissions())
... |
Handle various client side errors when describing snapshots
def extract_bad_snapshot(e):
"""Handle various client side errors when describing snapshots"""
msg = e.response['Error']['Message']
error = e.response['Error']['Code']
e_snap_id = None
if error == 'InvalidSnapshot.NotFo... |
STS Role assume a boto3.Session
with automatic credential renewal.
Args:
role_arn: iam role arn to assume
session_name: client session identifier
session: an optional extant session, note session is captured
in a function closure for renewing the sts assumed role.
:return: a boto3... |
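A sketch of the renewal mechanism described above, using botocore's RefreshableCredentials so the assume-role call is re-run as the temporary credentials near expiry; attaching the credentials through the private _credentials attribute is an assumption of this sketch:

import boto3
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session as get_botocore_session

def assumed_session_sketch(role_arn, session_name, session=None):
    """Return a boto3.Session whose assumed-role credentials refresh automatically."""
    if session is None:
        session = boto3.Session()

    def refresh():
        creds = session.client('sts').assume_role(
            RoleArn=role_arn, RoleSessionName=session_name)['Credentials']
        return dict(
            access_key=creds['AccessKeyId'],
            secret_key=creds['SecretAccessKey'],
            token=creds['SessionToken'],
            expiry_time=creds['Expiration'].isoformat())

    credentials = RefreshableCredentials.create_from_metadata(
        metadata=refresh(), refresh_using=refresh, method='sts-assume-role')
    core_session = get_botocore_session()
    core_session._credentials = credentials  # assumption: private botocore attribute
    return boto3.Session(botocore_session=core_session)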
Does the resource tag schedule and policy match the current time.
def process_resource_schedule(self, i, value, time_type):
"""Does the resource tag schedule and policy match the current time."""
rid = i[self.id_key]
# this is to normalize trailing semicolons which when done allows
# da... |
Get the resource's tag value specifying its schedule.
def get_tag_value(self, i):
"""Get the resource's tag value specifying its schedule."""
# Look for the tag, Normalize tag key and tag value
found = False
for t in i.get('Tags', ()):
if t['Key'].lower() == self.tag_key:
... |
convert the tag to a dictionary, taking values as is
This method name and purpose are opaque... and not true.
def raw_data(tag_value):
"""convert the tag to a dictionary, taking values as is
This method name and purpose are opaque... and not true.
"""
data = {}
piece... |
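A minimal sketch of the parsing the surrounding schedule code implies, assuming a ';'-separated key=value tag format such as 'off=(m-f,19);on=(m-f,7);tz=pt' (the exact grammar is truncated above):

def raw_data_sketch(tag_value):
    """Convert a schedule tag value into a dict, taking values as-is."""
    data = {}
    for piece in tag_value.split(';'):
        if '=' not in piece:
            continue
        key, _, value = piece.partition('=')
        data[key.strip().lower()] = value.strip()
    return data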
test that provided tag keys are valid
def keys_are_valid(self, tag_value):
"""test that provided tag keys are valid"""
for key in ScheduleParser.raw_data(tag_value):
if key not in ('on', 'off', 'tz'):
return False
return True |
Garbage collect old custodian policies based on prefix.
We attempt to introspect to find the event sources for a policy
but without the old configuration this is implicit.
def resources_gc_prefix(options, policy_config, policy_collection):
"""Garbage collect old custodian policies based on prefix.
We... |
Get a boto3 session potentially cross account sts assumed
assumed sessions are automatically refreshed.
def get_session(account_info):
"""Get a boto3 sesssion potentially cross account sts assumed
assumed sessions are automatically refreshed.
"""
s = getattr(CONN_CACHE, '%s-session' % account_in... |
Bulk invoke a function via queues
Uses internal implementation details of rq.
def bulk_invoke(func, args, nargs):
"""Bulk invoke a function via queues
Uses internal implementation details of rq.
"""
# for comparison, simplest thing that works
# for i in nargs:
# argv = list(args)
#... |
Context manager for dealing with s3 errors in one place
bid: bucket_id in form of account_name:bucket_name
def bucket_ops(bid, api=""):
"""Context manager for dealing with s3 errors in one place
bid: bucket_id in form of account_name:bucket_name
"""
try:
yield 42
except ClientError as... |
Remove bits in content results to minimize memory utilization.
TODO: evolve this to a key filter on metadata, like date
def page_strip(page, versioned):
"""Remove bits in content results to minimize memory utilization.
TODO: evolve this to a key filter on metadata, like date
"""
# page strip fil... |
Scan all buckets in an account and schedule processing
def process_account(account_info):
"""Scan all buckets in an account and schedule processing"""
log = logging.getLogger('salactus.bucket-iterator')
log.info("processing account %s", account_info)
session = get_session(account_info)
client = ses... |
Process a collection of buckets.
For each bucket fetch location, versioning and size and
then kickoff processing strategy based on size.
def process_bucket_set(account_info, buckets):
"""Process a collection of buckets.
For each bucket fetch location, versioning and size and
then kickoff processi... |
Select and dispatch an object source for a bucket.
Choices are bucket partition, inventory, or direct pagination.
def dispatch_object_source(client, account_info, bid, bucket_info):
"""Select and dispatch an object source for a bucket.
Choices are bucket partition, inventory, or direct pagination.
""... |
Use set of keys as selector for character superset
Note this isn't optimal; it's probabilistic on the keyset char population.
def get_keys_charset(keys, bid):
""" Use set of keys as selector for character superset
Note this isn't optimal; it's probabilistic on the keyset char population.
"""
# use ... |
Try to detect the best partitioning strategy for a large bucket
Consider nested buckets with common prefixes, and flat buckets.
def detect_partition_strategy(bid, delimiters=('/', '-'), prefix=''):
"""Try to detect the best partitioning strategy for a large bucket
Consider nested buckets with common pref... |
Split up a bucket keyspace into smaller sets for parallel iteration.
def process_bucket_partitions(
bid, prefix_set=('',), partition='/', strategy=None, limit=4):
"""Split up a bucket keyspace into smaller sets for parallel iteration.
"""
if strategy is None:
return detect_partition_strateg... |
Load last inventory dump and feed as key source.
def process_bucket_inventory(bid, inventory_bucket, inventory_prefix):
"""Load last inventory dump and feed as key source.
"""
log.info("Loading bucket %s keys from inventory s3://%s/%s",
bid, inventory_bucket, inventory_prefix)
account, buc... |
Bucket pagination
def process_bucket_iterator(bid, prefix="", delimiter="", **continuation):
"""Bucket pagination
"""
log.info("Iterating keys bucket %s prefix %s delimiter %s",
bid, prefix, delimiter)
account, bucket = bid.split(':', 1)
region = connection.hget('bucket-regions', bid)... |
Retry support for resourcegroup tagging apis.
The resource group tagging api typically returns a 200 status code
with embedded resource specific errors. To enable resource specific
retry on throttles, we extract those, perform backoff w/ jitter and
continue. Other errors are immediately raised.
We... |
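A sketch of that retry loop against the resourcegroupstaggingapi response shape: inspect FailedResourcesMap, retry only the throttled ARNs after a jittered sleep, and raise immediately on any other embedded error (the throttle error code and backoff bounds are assumptions):

import random
import time

def tagging_retry_sketch(method, resource_arns, **kwargs):
    """Retry throttled ARNs reported in FailedResourcesMap; raise other errors."""
    remaining = list(resource_arns)
    while remaining:
        response = method(ResourceARNList=remaining, **kwargs)
        failures = response.get('FailedResourcesMap', {})
        throttled = [arn for arn, info in failures.items()
                     if info.get('ErrorCode') == 'ThrottlingException']
        errors = {arn: info for arn, info in failures.items()
                  if info.get('ErrorCode') != 'ThrottlingException'}
        if errors:
            raise RuntimeError("failed to tag resources: %s" % errors)
        if not throttled:
            return response
        remaining = throttled
        time.sleep(random.uniform(1, 3))  # backoff with jitter
    return None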
Returns a list of tags from the resource and user supplied, in
the format: [{'Key': 'key', 'Value': 'value'}]
Due to drift on implementation on copy-tags/tags used throughout
the code base, the following options are supported:
copy_tags (Tags to copy from the resource):
- list of str, e.g. ['... |
Move source tag value to destination tag value
- Collect value from old tag
- Delete old tag
- Create new tag & assign stored value
def process_rename(self, client, tag_value, resource_set):
"""
Move source tag value to destination tag value
- Collect value from old ta... |
Transform tag value
- Collect value from tag
- Transform Tag value
- Assign new value for key
def process_transform(self, tag_value, resource_set):
"""
Transform tag value
- Collect value from tag
- Transform Tag value
- Assign new value for key
... |
Returns a mapping of {resource_id: {tagkey: tagvalue}}
def get_resource_tag_map(self, r_type, ids):
"""
Returns a mapping of {resource_id: {tagkey: tagvalue}}
"""
manager = self.manager.get_resource_manager(r_type)
r_id = manager.resource_type.id
# TODO only fetch resour... |