Save object to the database. Removes all other entries if there are any. def save(self, *args, **kwargs): """ Save object to the database. Removes all other entries if there are any. """ self.__class__.objects.exclude(id=self.id).delete() super(SingletonModel, se...
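The "delete everything else, then save" singleton pattern described above can be sketched without the Django ORM; `FakeTable` below is a hypothetical in-memory stand-in for `SingletonModel.objects`, since the real `save()` body is truncated:

```python
# Minimal sketch of the singleton-save pattern, assuming an in-memory
# store in place of the Django ORM (FakeTable is hypothetical).
class FakeTable:
    def __init__(self):
        self.rows = {}

    def save(self, obj_id, obj):
        # Keep only the row being saved, mimicking
        # objects.exclude(id=self.id).delete() followed by super().save().
        self.rows = {obj_id: obj}

table = FakeTable()
table.save(1, "first")
table.save(2, "second")  # the earlier row is removed
```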
Get the mappings from MAGICC to OpenSCM regions. This is not a pure inverse of the other way around. For example, we never provide "GLOBAL" as a MAGICC return value because it's unnecessarily confusing when we also have "World". Fortunately MAGICC doesn't ever read the name "GLOBAL" so this shouldn't ma...
Convert MAGICC regions to OpenSCM regions Parameters ---------- regions : list_like, str Regions to convert inverse : bool If True, convert the other way i.e. convert OpenSCM regions to MAGICC7 regions Returns ------- ``type(regions)`` Set of converted regi...
Get the mappings from MAGICC7 to OpenSCM variables. Parameters ---------- inverse : bool If True, return the inverse mappings i.e. OpenSCM to MAGICC7 mappings Returns ------- dict Dictionary of mappings def get_magicc7_to_openscm_variable_mapping(inverse=False): """Get the...
Convert MAGICC7 variables to OpenSCM variables Parameters ---------- variables : list_like, str Variables to convert inverse : bool If True, convert the other way i.e. convert OpenSCM variables to MAGICC7 variables Returns ------- ``type(variables)`` Set of...
Get the mappings from MAGICC6 to MAGICC7 variables. Note that this mapping is not one to one. For example, "HFC4310", "HFC43-10" and "HFC-43-10" in MAGICC6 all map to "HFC4310" in MAGICC7 but "HFC4310" in MAGICC7 maps back to "HFC4310". Note that HFC-245fa was mistakenly labelled as HFC-245ca in MAGI...
Convert MAGICC6 variables to MAGICC7 variables Parameters ---------- variables : list_like, str Variables to convert inverse : bool If True, convert the other way i.e. convert MAGICC7 variables to MAGICC6 variables Raises ------ ValueError If you try to con...
Get the mappings from Pint to Fortran safe units. Fortran can't handle special characters like "^" or "/" in names, but we need these in Pint. Conversely, Pint stores variables with spaces by default e.g. "Mt CO2 / yr" but we don't want these in the input files as Fortran is likely to think the whitesp...
Convert Pint units to Fortran safe units Parameters ---------- units : list_like, str Units to convert inverse : bool If True, convert the other way i.e. convert Fortran safe units to Pint units Returns ------- ``type(units)`` Set of converted units def convert_pi...
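The Pint-to-Fortran-safe conversion described above can be sketched as a table of character replacements; the replacement table below is an assumption based on the stated constraints (no "^", "/" or whitespace in Fortran-facing names), not the actual pymagicc mapping:

```python
# Hypothetical replacement table for making Pint units Fortran-safe:
# strip spaces, then replace characters Fortran cannot handle in names.
_REPLACEMENTS = {" ": "", "^": "super", "/": "per"}

def to_fortran_safe(unit):
    """Return a unit string with no spaces or special characters."""
    for frm, to in _REPLACEMENTS.items():
        unit = unit.replace(frm, to)
    return unit

safe = to_fortran_safe("Mt CO2 / yr")
```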
Overrides the base evaluation to set the value to the evaluation result of the value expression in the schema def run_evaluate(self) -> None: """ Overrides the base evaluation to set the value to the evaluation result of the value expression in the schema """ result = No...
Sets the value of a key to a supplied value def set(self, key: Any, value: Any) -> None: """ Sets the value of a key to a supplied value """ if key is not None: self[key] = value
Increments the value set against a key. If the key is not present, 0 is assumed as the initial state def increment(self, key: Any, by: int = 1) -> None: """ Increments the value set against a key. If the key is not present, 0 is assumed as the initial state """ if key is not None: self[ke...
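The increment behaviour described above (missing key starts at 0, `None` keys ignored) fits in a small `dict` subclass; `CountingDict` is an illustrative name, not the class from the source:

```python
class CountingDict(dict):
    """Dict whose increment() treats a missing key as starting from 0."""
    def increment(self, key, by=1):
        if key is not None:
            self[key] = self.get(key, 0) + by

counts = CountingDict()
counts.increment("a")
counts.increment("a", by=3)
counts.increment(None)  # silently ignored, matching the described guard
```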
Inserts an item to the list as long as it is not None def insert(self, index: int, obj: Any) -> None: """ Inserts an item to the list as long as it is not None """ if obj is not None: super().insert(index, obj)
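The None-dropping insert above can be exercised with a minimal `list` subclass; `NonNoneList` is a hypothetical name for the class the snippet belongs to:

```python
class NonNoneList(list):
    """List whose insert() drops None values, as described above."""
    def insert(self, index, obj):
        if obj is not None:
            super().insert(index, obj)

items = NonNoneList([1, 2])
items.insert(1, None)  # ignored
items.insert(1, 99)
```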
Get the THISFILE_DATTYPE and THISFILE_REGIONMODE flags for a given region set. In all MAGICC input files, there are two flags: THISFILE_DATTYPE and THISFILE_REGIONMODE. These tell MAGICC how to read in a given input file. This function maps the regions which are in a given file to the value of these flags ...
Get the region order expected by MAGICC. Parameters ---------- regions : list_like The regions to get THISFILE_DATTYPE and THISFILE_REGIONMODE flags for. scen7 : bool, optional Whether the file we are getting the flags for is a SCEN7 file or not. Returns ------- list ...
Get special code for MAGICC6 SCEN files. At the top of every MAGICC6 and MAGICC5 SCEN file there is a two digit number. The first digit, the 'scenfile_region_code', tells MAGICC how many regions data is being provided for. The second digit, the 'scenfile_emissions_code', tells MAGICC which gases are in ...
Determine the tool to use for reading/writing. The function uses an internally defined set of mappings between filepaths, regular expressions and readers/writers to work out which tool to use for a given task, given the filepath. It is intended for internal use only, but is public because of its im...
Pull out a single config set from a parameters_out namelist. This function returns a single file with the config that needs to be passed to MAGICC in order to do the same run as is represented by the values in ``parameters_out``. Parameters ---------- parameters_out : dict, f90nml.Namelist ...
Pull out a single config set from a MAGICC ``PARAMETERS.OUT`` file. This function reads in the ``PARAMETERS.OUT`` file and returns a single file with the config that needs to be passed to MAGICC in order to do the same run as is represented by the values in ``PARAMETERS.OUT``. Parameters ---------...
Convert an RCP name into the generic Pymagicc RCP name The conversion is case insensitive. Parameters ---------- inname : str The name for which to get the generic Pymagicc RCP name Returns ------- str The generic Pymagicc RCP name Examples -------- >>> get_ge...
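The case-insensitive RCP name lookup described above can be sketched with a lowercased mapping table; the table and the generic names below are assumptions, since the real pymagicc table is not shown in the truncated row:

```python
# Hypothetical alias table; keys are lowercased so lookup is
# case-insensitive, as the docstring above requires.
_RCP_NAMES = {
    "rcp26": "RCP26",
    "rcp3pd": "RCP26",
    "rcp45": "RCP45",
    "rcp6": "RCP60",
    "rcp60": "RCP60",
    "rcp85": "RCP85",
}

def get_generic_rcp_name(inname):
    """Return the generic RCP name for inname, matching case-insensitively."""
    try:
        return _RCP_NAMES[inname.lower()]
    except KeyError:
        raise ValueError("No generic name for {}".format(inname))

generic = get_generic_rcp_name("RCP3PD")
```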
Join two sets of timeseries Parameters ---------- base : :obj:`MAGICCData`, :obj:`pd.DataFrame`, filepath Base timeseries to use. If a filepath, the data will first be loaded from disk. overwrite : :obj:`MAGICCData`, :obj:`pd.DataFrame`, filepath Timeseries to join onto base. Any point...
Read a MAGICC .SCEN file. Parameters ---------- filepath : str Filepath of the .SCEN file to read columns : dict Passed to ``__init__`` method of MAGICCData. See ``MAGICCData.__init__`` for details. kwargs Passed to ``__init__`` method of MAGICCData. See ``...
Determine the OpenSCM variable from a filepath. Uses MAGICC's internal, implicit, filenaming conventions. Parameters ---------- filepath : str Filepath from which to determine the OpenSCM variable. Returns ------- str The OpenSCM variable implied by the filepath. def _get...
Find the start and end of the embedded namelist. Returns ------- (int, int) start and end index for the namelist def _find_nml(self): """ Find the start and end of the embedded namelist. Returns ------- (int, int) start and end i...
Extract the tabulated data from the input file. Parameters ---------- stream : Streamlike object A Streamlike object (nominally StringIO) containing the table to be extracted metadata : dict Metadata read in from the header and the namelist R...
stream : Streamlike object A Streamlike object (nominally StringIO) containing the data to be extracted ch : dict Column headers to use for the output pd.DataFrame Returns ------- :obj:`pd.DataFrame` Dataframe with processed datablock de...
Determine the file variable from the filepath. Returns ------- str Best guess of variable name from the filepath def _get_variable_from_filepath(self): """ Determine the file variable from the filepath. Returns ------- str Best g...
Parse the header for additional metadata. Parameters ---------- header : str All the lines in the header. Returns ------- dict The metadata in the header. def process_header(self, header): """ Parse the header for additional meta...
Read a data header line, ensuring that it starts with the expected header Parameters ---------- stream : :obj:`StreamIO` Stream object containing the text to read expected_header : str, list of strs Expected header of the data header line def _read_data_header_...
Read out the next chunk of memory Values in fortran binary streams begin and end with the number of bytes :param t: Data type (same format as used by struct). :return: Numpy array if the variable is an array, otherwise a scalar. def read_chunk(self, t): """ Read out the next ch...
Extract the tabulated data from the input file # Arguments stream (Streamlike object): A Streamlike object (nominally StringIO) containing the table to be extracted metadata (dict): metadata read in from the header and the namelist # Returns df (pandas.DataFrame): c...
Reads the first part of the file to get some essential metadata # Returns return (dict): the metadata in the header def process_header(self, data): """ Reads the first part of the file to get some essential metadata # Returns return (dict): the metadata in the header ...
Write a MAGICC input file from df and metadata Parameters ---------- magicc_input : :obj:`pymagicc.io.MAGICCData` MAGICCData object which holds the data to write filepath : str Filepath of the file to write to. def write(self, magicc_input, filepath): "...
Append any input which can be converted to MAGICCData to self. Parameters ---------- other : MAGICCData, pd.DataFrame, pd.Series, str Source of data to append. inplace : bool If True, append ``other`` inplace, otherwise return a new ``MAGICCData`` in...
Write an input file to disk. Parameters ---------- filepath : str Filepath of the file to write. magicc_version : int The MAGICC version for which we want to write files. MAGICC7 and MAGICC6 namelists are incompatible hence we need to know which one ...
Validates a set of attributes as identifiers in a spec def validate_python_identifier_attributes(fully_qualified_name: str, spec: Dict[str, Any], *attributes: str) -> List[InvalidIdentifierError]: """ Validates a set of attributes as identifiers in a spec """ errors: L...
Validates to ensure that a set of attributes are present in spec def validate_required_attributes(fully_qualified_name: str, spec: Dict[str, Any], *attributes: str) -> List[RequiredAttributeError]: """ Validates to ensure that a set of attributes are present in spec """ return ...
Validates to ensure that a set of attributes do not contain empty values def validate_empty_attributes(fully_qualified_name: str, spec: Dict[str, Any], *attributes: str) -> List[EmptyAttributeError]: """ Validates to ensure that a set of attributes do not contain empty values """ ...
Validates to ensure that the value is a number of the specified type, and lies with the specified range def validate_number_attribute( fully_qualified_name: str, spec: Dict[str, Any], attribute: str, value_type: Union[Type[int], Type[float]] = int, minimum: Optional[Union[int, f...
Validates to ensure that the value of an attribute lies within an allowed set of candidates def validate_enum_attribute(fully_qualified_name: str, spec: Dict[str, Any], attribute: str, candidates: Set[Union[str, int, float]]) -> Optional[InvalidValueError]: """ Validates to ensure that ...
Parses a flat key string and returns a key def parse(key_string: str) -> 'Key': """ Parses a flat key string and returns a key """ parts = key_string.split(Key.PARTITION) key_type = KeyType.DIMENSION if parts[3]: key_type = KeyType.TIMESTAMP return Key(key_type, part...
Parses a flat key string and returns a key def parse_sort_key(identity: str, sort_key_string: str) -> 'Key': """ Parses a flat key string and returns a key """ parts = sort_key_string.split(Key.PARTITION) key_type = KeyType.DIMENSION if parts[2]: key_type = KeyType.TIMESTAMP...
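The flat-key parsing in the two rows above (split on a partition character, then classify as a timestamp key if a particular part is non-empty) can be sketched as follows; the `"/"` partition character, the part layout and the string key types are assumptions based on the truncated snippets:

```python
# Sketch of Key.parse(): split the flat string on the partition
# character and classify by whether the timestamp slot is populated.
PARTITION = "/"  # assumed value of Key.PARTITION

def parse_key(key_string):
    """Return (key_type, parts) for a flat key string."""
    parts = key_string.split(PARTITION)
    key_type = "dimension"
    if len(parts) > 3 and parts[3]:
        key_type = "timestamp"
    return key_type, parts

kt, parts = parse_key("user1/by_day/group/2020-01-01")
```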
Checks if this key starts with the other key provided. Returns False if key_type, identity or group are different. For `KeyType.TIMESTAMP` returns True. For `KeyType.DIMENSION` does prefix match between the two dimensions property. def starts_with(self, other: 'Key') -> bool: """ ...
Evaluates the dimension fields. Returns False if any of the fields could not be evaluated. def _evaluate_dimension_fields(self) -> bool: """ Evaluates the dimension fields. Returns False if any of the fields could not be evaluated. """ for _, item in self._dimension_fields.items(): ...
Compares the dimension field values to the value in regular fields. def _compare_dimensions_to_fields(self) -> bool: """ Compares the dimension field values to the value in regular fields.""" for name, item in self._dimension_fields.items(): if item.value != self._nested_items[name].value: ...
Generates the Key object based on dimension fields. def _key(self): """ Generates the Key object based on dimension fields. """ return Key(self._schema.key_type, self._identity, self._name, [str(item.value) for item in self._dimension_fields.values()])
Run a MAGICC scenario and return output data and (optionally) config parameters. As a reminder, putting ``out_parameters=1`` will cause MAGICC to write out its parameters into ``out/PARAMETERS.OUT`` and they will then be read into ``output.metadata["parameters"]`` where ``output`` is the returned object. ...
Evaluates the anchor condition against the specified block. :param block: Block to run the anchor condition against. :return: True, if the anchor condition is met, otherwise, False. def run_evaluate(self, block: TimeAggregate) -> bool: """ Evaluates the anchor condition against the spec...
Injects the block start and end times def extend_schema_spec(self) -> None: """ Injects the block start and end times """ super().extend_schema_spec() if self.ATTRIBUTE_FIELDS in self._spec: # Add new fields to the schema spec. Since `_identity` is added by the super, new elements ...
Constructs the spec for predefined fields that are to be included in the master spec prior to schema load :param name_in_context: Name of the current object in the context :return: def _build_time_fields_spec(name_in_context: str) -> List[Dict[str, Any]]: """ Constructs the spec for pre...
Apply a number of substitutions to a string(s). The substitutions are applied effectively all at once. This means that conflicting substitutions don't interact. Where substitutions are conflicting, the one which is longer takes precedence. This is confusing so we recommend that you look at the examples...
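The "all at once" semantics above can be implemented with a single regex pass: each position in the string is matched at most once, so the output of one substitution is never re-substituted, and sorting the patterns longest-first gives longer substitutions precedence. This is a sketch of the technique, not the pymagicc implementation:

```python
import re

def apply_substitutions(text, substitutions):
    """Apply all substitutions simultaneously; longer matches win."""
    # Longest-first alternation so e.g. "CO2" beats "CO" at the same spot.
    pattern = "|".join(
        re.escape(k) for k in sorted(substitutions, key=len, reverse=True)
    )
    return re.sub(pattern, lambda m: substitutions[m.group(0)], text)

# "a" -> "b" does not then trigger "b" -> "c": substitutions don't interact.
result = apply_substitutions("ab", {"a": "b", "b": "c"})
```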
Returns the list of items from the store based on the given time range or count. :param base_key: Items which don't start with the base_key are filtered out. :param start_time: Start time for the range query :param end_time: End time of the range query. If None, count is used. :param c...
Returns the list of items from the store based on the given time range or count. This is used when the key being used is a TIMESTAMP key. def _get_range_timestamp_key(self, start: Key, end: Key, count: int = 0) -> List[Tuple[Key, Any]]: """ Returns the list of ...
Returns the list of items from the store based on the given time range or count. This is used when the key being used is a DIMENSION key. def _get_range_dimension_key(self, base_key: Key, start_time: datetime, e...
Restricts items to count number if len(items) is larger than abs(count). This function assumes that items is sorted by time. :param items: The items to restrict. :param count: The number of items returned. def _restrict_items_to_count(items: List[Tuple[Key, Any]], count: int) -> List[Tuple[Key...
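The count restriction described above can be sketched directly; the convention that a positive count keeps the earliest items and a negative count the latest is an assumption inferred from the "abs(count)" wording in the truncated docstring:

```python
def restrict_items_to_count(items, count):
    """Keep at most abs(count) items from a time-sorted list.

    Assumed convention: positive count keeps the earliest items,
    negative count keeps the latest.
    """
    if len(items) <= abs(count):
        return items
    return items[:count] if count > 0 else items[count:]

latest_two = restrict_items_to_count([1, 2, 3, 4], -2)
```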
Builds an expression object. Adds an error if expression creation has errors. def build_expression(self, attribute: str) -> Optional[Expression]: """ Builds an expression object. Adds an error if expression creation has errors. """ expression_string = self._spec.get(attribute, None) if expre...
Adds errors to the error repository in schema loader def add_errors(self, *errors: Union[BaseSchemaError, List[BaseSchemaError]]) -> None: """ Adds errors to the error repository in schema loader """ self.schema_loader.add_errors(*errors)
Validates that the schema contains a series of required attributes def validate_required_attributes(self, *attributes: str) -> None: """ Validates that the schema contains a series of required attributes """ self.add_errors( validate_required_attributes(self.fully_qualified_name, self._spec...
Validates that the attribute contains a numeric value within boundaries if specified def validate_number_attribute(self, attribute: str, value_type: Union[Type[int], Type[float]] = int, minimum: Optional[Union[int, fl...
Validates that the attribute value is among the candidates def validate_enum_attribute(self, attribute: str, candidates: Set[Union[str, int, float]]) -> None: """ Validates that the attribute value is among the candidates """ self.add_errors( validate_enum_at...
Contains the validation routines that are to be executed as part of initialization by subclasses. When this method is being extended, the first line should always be: ```super().validate_schema_spec()``` def validate_schema_spec(self) -> None: """ Contains the validation routines that are to be execute...
Returns True when: 1. A WHERE clause is not specified 2. A WHERE clause is specified and it evaluates to True Returns False if a WHERE clause is specified and it evaluates to False def _needs_evaluation(self) -> bool: """ Returns True when: 1. Where clause ...
Evaluates the current item :returns An evaluation result object containing the result, or reasons why evaluation failed def run_evaluate(self, *args, **kwargs) -> None: """ Evaluates the current item :returns An evaluation result object containing the result, or reasons why ...
Implements snapshot for collections by recursively invoking snapshot of all child items def _snapshot(self) -> Dict[str, Any]: """ Implements snapshot for collections by recursively invoking snapshot of all child items """ try: return {name: item._snapshot for name, item in ...
Restores the state of a collection from a snapshot def run_restore(self, snapshot: Dict[Union[str, Key], Any]) -> 'BaseItemCollection': """ Restores the state of a collection from a snapshot """ try: for name, snap in snapshot.items(): if isinstance(name, Ke...
Prepares window if any is specified. :param start_time: The anchor block start_time from where the window should be generated. def _prepare_window(self, start_time: datetime) -> None: """ Prepares window if any is specified. :param start_time: The anchor block start_time from wh...
Generates the end time to be used for the store range query. :param start_time: Start time to use as an offset to calculate the end time based on the window type in the schema. :return: def _get_end_time(self, start_time: datetime) -> datetime: """ Generates the end time to be u...
Converts [(Key, block)] to [BlockAggregate] :param blocks: List of (Key, block) blocks. :return: List of BlockAggregate def _load_blocks(self, blocks: List[Tuple[Key, Any]]) -> List[TimeAggregate]: """ Converts [(Key, block)] to [BlockAggregate] :param blocks: List of (Key, bloc...
Executes the streaming and window BTS on the given records. An optional old state can be provided which initializes the state for execution. This is useful for batch execution where the previous state is written out to storage and can be loaded for the next batch run. :param identity: Identity of th...
Uses the given iterable events and the data processor to convert each event into a list of Records along with its identity and time. :param events: iterable events. :param data_processor: DataProcessor to process each event in events. :return: yields Tuple[Identity, TimeAndRecord] for al...
String representation with additional information def to_string(self, hdr, other): """String representation with additional information""" result = "%s[%s,%s" % ( hdr, self.get_type(self.type), self.get_clazz(self.clazz)) if self.unique: result += "-unique," ...
Returns true if the question is answered by the record def answered_by(self, rec): """Returns true if the question is answered by the record""" return self.clazz == rec.clazz and \ (self.type == rec.type or self.type == _TYPE_ANY) and \ self.name == rec.name
Sets this record's TTL and created time to that of another record. def reset_ttl(self, other): """Sets this record's TTL and created time to that of another record.""" self.created = other.created self.ttl = other.ttl
String representation with additional information def to_string(self, other): """String representation with additional information""" arg = "%s/%s,%s" % ( self.ttl, self.get_remaining_ttl(current_time_millis()), other) return DNSEntry.to_string(self, "record", arg)
Used in constructing an outgoing packet def write(self, out): """Used in constructing an outgoing packet""" out.write_string(self.address, len(self.address))
Used in constructing an outgoing packet def write(self, out): """Used in constructing an outgoing packet""" out.write_string(self.cpu, len(self.cpu)) out.write_string(self.os, len(self.os))
Used in constructing an outgoing packet def write(self, out): """Used in constructing an outgoing packet""" out.write_string(self.text, len(self.text))
Update only one property in the dict def set_property(self, key, value): """ Update only one property in the dict """ self.properties[key] = value self.sync_properties()
Used in constructing an outgoing packet def write(self, out): """Used in constructing an outgoing packet""" out.write_short(self.priority) out.write_short(self.weight) out.write_short(self.port) out.write_name(self.server)
Reads header portion of packet def read_header(self): """Reads header portion of packet""" format = '!HHHHHH' length = struct.calcsize(format) info = struct.unpack(format, self.data[self.offset:self.offset + length]) self.offset += length self.id = info[...
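The header read above unpacks six network-order unsigned shorts (`!HHHHHH`). A self-contained sketch of the same parse, with the field names taken from the surrounding rows (`num_questions`, `num_answers`, `num_authorities`); the exact flag handling in the original is truncated, so `flags` is left raw here:

```python
import struct

def read_dns_header(data):
    """Unpack the 12-byte DNS header into named fields."""
    fmt = "!HHHHHH"  # id, flags, qd, an, ns, ar counts, big-endian
    (msg_id, flags, num_questions, num_answers,
     num_authorities, num_additionals) = struct.unpack_from(fmt, data, 0)
    return {
        "id": msg_id,
        "flags": flags,
        "num_questions": num_questions,
        "num_answers": num_answers,
        "num_authorities": num_authorities,
        "num_additionals": num_additionals,
    }

header = read_dns_header(struct.pack("!HHHHHH", 0x1234, 0, 1, 2, 0, 0))
```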
Reads questions section of packet def read_questions(self): """Reads questions section of packet""" format = '!HH' length = struct.calcsize(format) for i in range(0, self.num_questions): name = self.read_name() info = struct.unpack(format, sel...
Reads an integer from the packet def read_int(self): """Reads an integer from the packet""" format = '!I' length = struct.calcsize(format) info = struct.unpack(format, self.data[self.offset:self.offset + length]) self.offset += length return info[0]
Reads a character string from the packet def read_character_string(self): """Reads a character string from the packet""" length = ord(self.data[self.offset]) self.offset += 1 return self.read_string(length)
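A DNS character-string, as read above, is one length byte followed by that many bytes of payload. A standalone sketch (returning the new offset explicitly instead of mutating `self.offset`):

```python
def read_character_string(data, offset):
    """Read a length-prefixed DNS character-string from bytes.

    Returns (value, new_offset).
    """
    length = data[offset]        # single length byte
    start = offset + 1
    return data[start:start + length], start + length

value, new_offset = read_character_string(b"\x05hello!", 0)
```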
Reads a string of a given length from the packet def read_string(self, len): """Reads a string of a given length from the packet""" format = '!' + str(len) + 's' length = struct.calcsize(format) info = struct.unpack(format, self.data[self.offset:self.offset + length]) ...
Reads the answers, authorities and additionals section of the packet def read_others(self): """Reads the answers, authorities and additionals section of the packet""" format = '!HHiH' length = struct.calcsize(format) n = self.num_answers + self.num_authorities + self.num...
Reads a UTF-8 string of a given length from the packet def read_utf(self, offset, len): """Reads a UTF-8 string of a given length from the packet""" try: result = self.data[offset:offset + len].decode('utf-8') except UnicodeDecodeError: result = str('') return re...
Reads a domain name from the packet def read_name(self): """Reads a domain name from the packet""" result = '' off = self.offset next = -1 first = off while 1: len = ord(self.data[off]) off += 1 if len == 0: break ...
Adds an answer def add_answer(self, inp, record): """Adds an answer""" if not record.suppressed_by(inp): self.add_answer_at_time(record, 0)
Adds an answer if it does not expire by a certain time def add_answer_at_time(self, record, now): """Adds an answer if it does not expire by a certain time""" if record is not None: if now == 0 or not record.is_expired(now): self.answers.append((record, now)) ...
Writes a single byte to the packet def write_byte(self, value): """Writes a single byte to the packet""" format = '!B' self.data.append(struct.pack(format, value)) self.size += 1
Inserts an unsigned short in a certain position in the packet def insert_short(self, index, value): """Inserts an unsigned short in a certain position in the packet""" format = '!H' self.data.insert(index, struct.pack(format, value)) self.size += 2
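The writer rows above accumulate packed byte chunks in a list and track a running size, which is what makes `insert_short` cheap: a short can be spliced in at any chunk index after the fact. A minimal sketch combining the two methods (the class name is hypothetical; the original class is not shown):

```python
import struct

class PacketWriter:
    """Minimal sketch of the chunk-list packet writer described above."""
    def __init__(self):
        self.data = []   # list of packed byte chunks
        self.size = 0

    def write_byte(self, value):
        self.data.append(struct.pack("!B", value))
        self.size += 1

    def insert_short(self, index, value):
        # Network byte order unsigned short, spliced into the chunk list.
        self.data.insert(index, struct.pack("!H", value))
        self.size += 2

out = PacketWriter()
out.write_byte(0xAB)
out.insert_short(0, 0x1234)  # e.g. patch a length field in later
packet = b"".join(out.data)
```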
Writes an unsigned integer to the packet def write_int(self, value): """Writes an unsigned integer to the packet""" format = '!I' self.data.append(struct.pack(format, int(value))) self.size += 4
Writes a string to the packet def write_string(self, value, length): """Writes a string to the packet""" format = '!' + str(length) + 's' self.data.append(struct.pack(format, value)) self.size += length
Writes a UTF-8 string of a given length to the packet def write_utf(self, s): """Writes a UTF-8 string of a given length to the packet""" utfstr = s.encode('utf-8') length = len(utfstr) if length > 64: raise NamePartTooLongException self.write_byte(length) se...
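The UTF-8 write above is a length byte followed by the encoded bytes, with the 64-byte limit raising an error. A standalone sketch returning bytes instead of writing to the packet (a `ValueError` stands in for `NamePartTooLongException`):

```python
def encode_utf_name_part(s):
    """Encode a name part as a length byte plus UTF-8 bytes (max 64)."""
    utfstr = s.encode("utf-8")
    length = len(utfstr)
    if length > 64:
        # Stand-in for NamePartTooLongException from the original code.
        raise ValueError("name part too long")
    return bytes([length]) + utfstr

encoded = encode_utf_name_part("ab")
```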
Writes a domain name to the packet def write_name(self, name): """Writes a domain name to the packet""" try: # Find existing instance of this name in packet index = self.names[name] except KeyError: # No record of this name already, so write it...
Writes a question to the packet def write_question(self, question): """Writes a question to the packet""" self.write_name(question.name) self.write_short(question.type) self.write_short(question.clazz)
Writes a record (answer, authoritative answer, additional) to the packet def write_record(self, record, now): """Writes a record (answer, authoritative answer, additional) to the packet""" self.write_name(record.name) self.write_short(record.type) if record.unique and se...
Returns a string containing the packet's bytes No further parts should be added to the packet once this is done. def packet(self): """Returns a string containing the packet's bytes No further parts should be added to the packet once this is done.""" if not self.finishe...
Adds an entry def add(self, entry): """Adds an entry""" if self.get(entry) is not None: return try: entries = self.cache[entry.key] except KeyError: entries = self.cache[entry.key] = [] entries.append(entry)
Adds and signs an entry def sign(self, entry, signer=None): """Adds and signs an entry""" if (self.get(entry) is not None): return if (entry.rrsig is None) and (self.private is not None): entry.rrsig = DNSSignatureS(entry.name, _TYPE_RRSIG, _CLASS_IN, e...