Convert convolution layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_convolution1d(builder, layer, input_names, output_names, keras_layer):
"""
Convert convolut... |
Convert separable convolution layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_separable_convolution(builder, layer, input_names, output_names, keras_layer):
"""
... |
Convert a Batch Normalization layer.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_batchnorm(builder, layer, input_names, output_names, keras_layer):
"""
Convert a Batch Normalization layer.
Para... |
Convert a flatten layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_flatten(builder, layer, input_names, output_names, keras_layer):
"""
Convert a flatten layer f... |
Convert concat layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_merge(builder, layer, input_names, output_names, keras_layer):
"""
Convert concat layer from ker... |
Convert pooling layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_pooling(builder, layer, input_names, output_names, keras_layer):
"""
Convert pooling layer from... |
Convert padding layer from keras to coreml.
Keras only supports zero padding at this time.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_padding(builder, layer, input_names, output_names, ke... |
Convert padding layer from keras to coreml.
Keras only supports zero padding at this time.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_cropping(builder, layer, input_names, output_names, k... |
Convert an upsampling layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_upsample(builder, layer, input_names, output_names, keras_layer):
"""
Convert an upsampling l... |
Convert a permute layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_permute(builder, layer, input_names, output_names, keras_layer):
"""
Convert a permute layer ... |
Convert a SimpleRNN layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_simple_rnn(builder, layer, input_names, output_names, keras_layer):
"""
Convert a SimpleR... |
Convert an LSTM layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_lstm(builder, layer, input_names, output_names, keras_layer):
"""
Convert an LSTM layer from ke... |
Convert a GRU layer from keras to coreml.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_gru(builder, layer, input_names, output_names, keras_layer):
"""
Convert a GRU layer from keras t... |
Convert a bidirectional layer from keras to coreml.
Currently assumes the units are LSTMs.
Parameters
----------
keras_layer: layer
A keras layer object.
builder: NeuralNetworkBuilder
A neural network builder object.
def convert_bidirectional(builder, layer, input_names, output_na... |
obj[:]
def SLICE_0(self, instr):
'obj[:]'
value = self.ast_stack.pop()
kw = dict(lineno=instr.lineno, col_offset=0)
slice = _ast.Slice(lower=None, step=None, upper=None, **kw)
subscr = _ast.Subscript(value=value, slice=slice, ctx=_ast.Load(), **kw)
self.ast_stack.appen... |
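For comparison, the standard library's ast module produces the same shape of node for this expression; a minimal check (an added illustration, not part of the decompiler) would be:
import ast

# Parsing "obj[:]" yields a Subscript whose Slice has no lower/upper/step,
# which is exactly the node this handler pushes onto the AST stack.
tree = ast.parse("obj[:]", mode="eval")
print(ast.dump(tree.body))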
obj[lower:] = expr
def STORE_SLICE_1(self, instr):
'obj[lower:] = expr'
lower = self.ast_stack.pop()
value = self.ast_stack.pop()
expr = self.ast_stack.pop()
kw = dict(lineno=instr.lineno, col_offset=0)
slice = _ast.Slice(lower=lower, step=None, upper=None, **kw)
... |
obj[lower:upper] = expr
def STORE_SLICE_3(self, instr):
'obj[lower:upper] = expr'
upper = self.ast_stack.pop()
lower = self.ast_stack.pop()
value = self.ast_stack.pop()
expr = self.ast_stack.pop()
kw = dict(lineno=instr.lineno, col_offset=0)
slice = _as... |
del obj[:]
def DELETE_SLICE_0(self, instr):
'del obj[:]'
value = self.ast_stack.pop()
kw = dict(lineno=instr.lineno, col_offset=0)
slice = _ast.Slice(lower=None, step=None, upper=None, **kw)
subscr = _ast.Subscript(value=value, slice=slice, ctx=_ast.Del(), **kw)
... |
Create a content-based recommender model in which the similarity
between the items recommended is determined by the content of
those items rather than learned from user interaction data.
The similarity score between two items is calculated by first
computing the similarity between the item data for eac... |
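A hedged usage sketch, assuming turicreate exposes this as recommender.item_content_recommender.create and using an illustrative item_data SFrame:
import turicreate as tc

# Illustrative item metadata; similarity is computed from these columns,
# not learned from user interaction data.
item_data = tc.SFrame({'item_id': [1, 2, 3],
                       'genre': ['a', 'b', 'a'],
                       'length': [120, 90, 100]})
model = tc.recommender.item_content_recommender.create(item_data, 'item_id')
similar_items = model.get_similar_items()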
Return a set of symbols in `node` that are assigned.
:param node: ast node
:returns: set of strings.
def lhs(node):
'''
Return a set of symbols in `node` that are assigned.
:param node: ast node
:returns: set of strings.
'''
gen = ConditionalSymbolVisitor()
if... |
Group outputs into conditional and stable
:param node: ast node
:returns: tuple of (conditional, stable)
def conditional_lhs(node):
'''
Group outputs into conditional and stable
:param node: ast node
:returns: tuple of (conditional, stable)
'''
gen = ConditionalSymbolV... |
Group lhs and rhs into conditional, stable and undefined
:param node: ast node
:returns: tuple of (conditional_lhs, stable_lhs),(conditional_rhs, stable_rhs), undefined
def conditional_symbols(node):
'''
Group lhs and rhs into conditional, stable and undefined
:param node: ast node
... |
Load rabit library.
def _loadlib(lib='standard'):
"""Load rabit library."""
global _LIB
if _LIB is not None:
warnings.warn('rabit.init call was ignored because it has'\
' already been initialized', stacklevel=2)
return
if lib == 'standard':
_LIB = ctypes.cdll... |
Initialize the rabit module; call this once before using anything.
Parameters
----------
args: list of str, optional
The list of arguments used to initialize rabit;
usually you need to pass in sys.argv.
Defaults to sys.argv when it is None.
lib: {'standard', 'mock', 'mpi'}
... |
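A minimal lifecycle sketch under the API described above (init before any collective call, finalize at the end); assumes the module is importable as rabit:
import sys
import rabit

rabit.init(sys.argv)             # arguments default to sys.argv when None
rank = rabit.get_rank()          # this worker's id within the job
rabit.tracker_print('worker %d started\n' % rank)
rabit.finalize()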
Print message to the tracker.
This function can be used to communicate the information of
the progress to the tracker
Parameters
----------
msg : str
The message to be printed to tracker.
def tracker_print(msg):
"""Print message to the tracker.
This function can be used to commun... |
Perform allreduce, return the result.
Parameters
----------
data: numpy array
Input data.
op: int
Reduction operators, can be MIN, MAX, SUM, BITOR
prepare_fun: function
Lazy preprocessing function, if it is not None, prepare_fun(data)
will be called by the function b... |
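A small allreduce sketch; the reduction constants (rabit.SUM and friends) are assumed to be module-level as listed in the docstring:
import numpy as np
import rabit

data = np.ones(3, dtype=np.float32)
# Every worker contributes its array; all workers receive the elementwise sum.
result = rabit.allreduce(data, rabit.SUM)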
Internal function used by the module;
unpickles a model from a buffer specified by ptr and length.
Arguments:
ptr: ctypes.POINTER(ctypes.c_char)
pointer to the memory region of buffer
length: int
the length of buffer
def _load_model(ptr, length):
"""
Internal function ... |
Load latest check point.
Parameters
----------
with_local: bool, optional
whether the checkpoint contains local model
Returns
-------
tuple : tuple
if with_local: return (version, global_model, local_model)
else return (version, global_model)
if returned version =... |
Checkpoint the model.
This means we finished a stage of execution.
Every time we call check point, there is a version number which will increase by one.
Parameters
----------
global_model: anytype that can be pickled
globally shared model/state when calling this function,
the calle... |
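A hedged sketch of the checkpoint/restart loop implied by load_checkpoint and checkpoint; init_model, update, and num_rounds are hypothetical training helpers:
import rabit

version, model = rabit.load_checkpoint()   # version is 0 on a fresh start
if model is None:
    model = init_model()                   # hypothetical
for i in range(version, num_rounds):       # hypothetical round count
    update(model)                          # hypothetical training step
    rabit.checkpoint(model)                # increases the version by one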
Converts object detection annotations (ground truth or predictions) to
stacked format (an `SFrame` where each row is one object instance).
Parameters
----------
annotations_sarray: SArray
An `SArray` with unstacked predictions, exactly formatted as the
annotations column when training a... |
Converts object detection annotations (ground truth or predictions) to
unstacked format (an `SArray` where each element is a list of object
instances).
Parameters
----------
annotations_sframe: SFrame
An `SFrame` with stacked predictions, produced by the
`stack_annotations` function... |
Create a RankingFactorizationRecommender that learns latent factors for each
user and item and uses them to make rating predictions.
Parameters
----------
observation_data : SFrame
The dataset to use for training the model. It must contain a column of
user ids and a column of item ids. ... |
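A hedged usage sketch of the create call described above, with a tiny illustrative observation SFrame:
import turicreate as tc

observations = tc.SFrame({'user_id': ['a', 'a', 'b', 'c'],
                          'item_id': ['x', 'y', 'x', 'z']})
model = tc.ranking_factorization_recommender.create(
    observations, user_id='user_id', item_id='item_id')
recommendations = model.recommend(k=5)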
splits _sources/reference.rst into separate files
def preprocess():
"splits _sources/reference.rst into separate files"
text = open("./_sources/reference.rst", "r").read()
os.remove("./_sources/reference.rst")
if not os.path.exists("./_sources/reference"):
os.makedirs("./_sources/reference")
... |
Returns an unsigned 32-bit integer that encodes the field number and
wire type information in standard protocol message wire format.
Args:
field_number: Expected to be an integer in the range [1, 1 << 29)
wire_type: One of the WIRETYPE_* constants.
def PackTag(field_number, wire_type):
"""Returns an uns... |
Returns the number of bytes required to serialize a single varint
using boundary value comparisons. (unrolled loop optimization -WPierce)
uint64 must be unsigned.
def _VarUInt64ByteSizeNoTag(uint64):
"""Returns the number of bytes required to serialize a single varint
using boundary value comparisons. (unrolle... |
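The boundary-comparison idea: a varint stores 7 payload bits per byte, so the byte count follows from comparing against successive powers of 2**7. An illustrative stand-alone version:
def varint_byte_size(uint64):
    # Each varint byte carries 7 payload bits; compare against 2**7, 2**14, ...
    if uint64 <= 0x7f: return 1
    if uint64 <= 0x3fff: return 2
    if uint64 <= 0x1fffff: return 3
    if uint64 <= 0xfffffff: return 4
    if uint64 <= 0x7ffffffff: return 5
    if uint64 <= 0x3ffffffffff: return 6
    if uint64 <= 0x1ffffffffffff: return 7
    if uint64 <= 0xffffffffffffff: return 8
    if uint64 <= 0x7fffffffffffffff: return 9
    return 10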
Returns seconds as a human-friendly string, e.g. '1d 4h 47m 41s'
def _seconds_as_string(seconds):
"""
Returns seconds as a human-friendly string, e.g. '1d 4h 47m 41s'
"""
TIME_UNITS = [('s', 60), ('m', 60), ('h', 24), ('d', None)]
unit_strings = []
cur = max(int(seconds), 1)
for suffix, siz... |
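For example, 103661 seconds is 1 day, 4 hours, 47 minutes and 41 seconds, which matches the docstring's sample output:
# 103661 = 1*86400 + 4*3600 + 47*60 + 41
print(_seconds_as_string(103661))   # expected: '1d 4h 47m 41s'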
Returns the module holding the conversion functions for a
particular model.
def _get_converter_module(sk_obj):
"""
Returns the module holding the conversion functions for a
particular model.
"""
try:
cv_idx = _converter_lookup[sk_obj.__class__]
except KeyError:
raise Value... |
Converts a generic sklearn pipeline, transformer, classifier, or regressor
into a Core ML specification.
def _convert_sklearn_model(input_sk_obj, input_features = None,
output_feature_names = None, class_labels = None):
"""
Converts a generic sklearn pipeline, transformer, classi... |
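This internal routine is normally reached through coremltools' public sklearn converter; a hedged usage sketch with an illustrative linear model:
from sklearn.linear_model import LinearRegression
import coremltools

X = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]
y = [1.0, 2.0, 3.0]
sk_model = LinearRegression().fit(X, y)

# Feature and output names below are illustrative.
mlmodel = coremltools.converters.sklearn.convert(
    sk_model, input_features=['a', 'b'], output_feature_names='target')
mlmodel.save('linear.mlmodel')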
Set the default prediction value(s).
The values given here form the base prediction value that the values
at activated leaves are added to. If values is a scalar, then
the output of the tree must also be 1 dimensional; otherwise, values
must be a list with length matching the dimension... |
r"""
Set the post processing transform applied after the prediction value
from the tree ensemble.
Parameters
----------
value: str
A value denoting the transform applied. Possible values are:
- "NoTransform" (default). Do not apply a transform.
... |
Add a branch node to the tree ensemble.
Parameters
----------
tree_id: int
ID of the tree to add the node to.
node_id: int
ID of the node within the tree.
feature_index: int
Index of the feature in the input being split on.
feature_... |
Add a leaf node to the tree ensemble.
Parameters
----------
tree_id: int
ID of the tree to add the node to.
node_id: int
ID of the node within the tree.
values: [float | int | list | dict]
Value(s) at the leaf node to add to the prediction w... |
Creates a new 'PropertySet' instance for the given raw properties,
or returns an already existing one.
def create (raw_properties = []):
""" Creates a new 'PropertySet' instance for the given raw properties,
or returns an already existing one.
"""
assert (is_iterable_typed(raw_properties, p... |
Creates new 'PropertySet' instances after checking
that all properties are valid and converting implicit
properties into gristed form.
def create_with_validation (raw_properties):
""" Creates new 'PropertySet' instances after checking
that all properties are valid and converting implicit
... |
Creates a property-set from the input given by the user, in the
context of 'jamfile-module' at 'location'.
def create_from_user_input(raw_properties, jamfile_module, location):
"""Creates a property-set from the input given by the user, in the
context of 'jamfile-module' at 'location'"""
assert is_iterab... |
Refines requirements with requirements provided by the user.
Specially handles "-<property>value" syntax in specification
to remove given requirements.
- parent-requirements -- property-set object with requirements
to refine
- specification -- string list of requirements provided by the user
... |
Returns properties that are neither incidental nor free.
def base (self):
""" Returns properties that are neither incidental nor free.
"""
result = [p for p in self.lazy_properties
if not(p.feature.incidental or p.feature.free)]
result.extend(self.base_)
return... |
Returns free properties which are not dependency properties.
def free (self):
""" Returns free properties which are not dependency properties.
"""
result = [p for p in self.lazy_properties
if not p.feature.incidental and p.feature.free]
result.extend(self.free_)
... |
Returns dependency properties.
def dependency (self):
""" Returns dependency properties.
"""
result = [p for p in self.lazy_properties if p.feature.dependency]
result.extend(self.dependency_)
return result |
Returns properties that are not dependencies.
def non_dependency (self):
""" Returns properties that are not dependencies.
"""
result = [p for p in self.lazy_properties if not p.feature.dependency]
result.extend(self.non_dependency_)
return result |
Returns incidental properties.
def incidental (self):
""" Returns incidental properties.
"""
result = [p for p in self.lazy_properties if p.feature.incidental]
result.extend(self.incidental_)
return result |
Refines this set's properties using the requirements passed as an argument.
def refine (self, requirements):
""" Refines this set's properties using the requirements passed as an argument.
"""
assert isinstance(requirements, PropertySet)
if requirements not in self.refined_:
... |
Computes the target path that should be used for
target with these properties.
Returns a tuple of
- the computed path
- if the path is relative to build directory, a value of
'true'.
def target_path (self):
""" Computes the target path that sh... |
Creates a new property set containing the properties in this one,
plus the ones of the property set passed as argument.
def add (self, ps):
""" Creates a new property set containing the properties in this one,
plus the ones of the property set passed as argument.
"""
ass... |
Returns all values of 'feature'.
def get (self, feature):
""" Returns all values of 'feature'.
"""
if type(feature) == type([]):
feature = feature[0]
if not isinstance(feature, b2.build.feature.Feature):
feature = b2.build.feature.get(feature)
assert isin... |
Returns all contained properties associated with 'feature'.
def get_properties(self, feature):
"""Returns all contained properties associated with 'feature'"""
if not isinstance(feature, b2.build.feature.Feature):
feature = b2.build.feature.get(feature)
assert isinstance(feature, b2.b... |
A unified interface for training recommender models. Based on simple
characteristics of the data, a type of model is selected and trained. The
trained model can be used to predict ratings and make recommendations.
To use specific options of a desired model, use the ``create`` function
of the correspond... |
Compare the prediction or recommendation performance of recommender models
on a common test dataset.
Models that are trained to predict ratings are compared separately from
models that are trained without target ratings. The ratings prediction
models are compared on root-mean-squared error, and the re... |
Compute precision and recall at a given cutoff for each user. In information
retrieval terms, precision represents the ratio of relevant, retrieved items
to the number of retrieved items. Recall represents the ratio of relevant,
retrieved items to the number of relevant items.
Let :math:`p_k` be a vecto... |
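A self-contained worked example of the two quantities at a cutoff k for one user, illustrating the definitions above rather than the library implementation:
def precision_recall_at_k(recommended, relevant, k):
    # precision@k: fraction of the top-k recommendations that are relevant
    # recall@k:    fraction of all relevant items that appear in the top-k
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    return hits / float(k), hits / float(len(relevant))

# 2 of the top-3 recommendations are relevant, out of 4 relevant items.
print(precision_recall_at_k(['a', 'b', 'c', 'd'], ['a', 'c', 'e', 'f'], k=3))
# -> (0.666..., 0.5)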
Create a recommender-friendly train-test split of the provided data set.
The test dataset is generated by first choosing `max_num_users` out of the
total number of users in `dataset`. Then, for each of the chosen test users,
a portion of the user's items (determined by `item_test_proportion`) is
random... |
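A hedged usage sketch, assuming the split is exposed as turicreate.recommender.util.random_split_by_user:
import turicreate as tc

data = tc.SFrame({'user_id': ['u1', 'u1', 'u2', 'u2', 'u3'],
                  'item_id': ['i1', 'i2', 'i1', 'i3', 'i2']})
# Hold out a portion of each sampled test user's items for evaluation.
train, test = tc.recommender.util.random_split_by_user(
    data, user_id='user_id', item_id='item_id',
    max_num_users=100, item_test_proportion=0.2)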
Get the current settings of the model. The keys depend on the type of
model.
Returns
-------
out : list
A list of fields that can be queried using the ``get`` method.
def _list_fields(self):
"""
Get the current settings of the model. The keys depend on the t... |
Returns a structured description of the model, including (where relevant)
the schema of the training data, description of the training data,
training statistics, and model hyperparameters.
Returns
-------
sections : list (of list of tuples)
A list of summary sections... |
Set current options for a model.
Parameters
----------
options : dict
A dictionary of the desired option settings. The key should be the name
of the option and each value is the desired value of the option.
def _set_current_options(self, options):
"""
Se... |
Processes the dataset parameter for type correctness.
Returns it as an SFrame.
def __prepare_dataset_parameter(self, dataset):
"""
Processes the dataset parameter for type correctness.
Returns it as an SFrame.
"""
# Translate the dataset argument into the proper type
... |
Returns a dictionary of (column : type) for the data used in the
model.
def _get_data_schema(self):
"""
Returns a dictionary of (column : type) for the data used in the
model.
"""
if not hasattr(self, "_data_schema"):
response = self.__proxy__.get_data_schem... |
Return a score prediction for the user ids and item ids in the provided
data set.
Parameters
----------
dataset : SFrame
Dataset in the same form used for training.
new_observation_data : SFrame, optional
``new_observation_data`` gives additional observa... |
Get the k most similar items for each item in items.
Each type of recommender has its own model for the similarity
between items. For example, the item_similarity_recommender will
return the most similar items according to the user-chosen
similarity; the factorization_recommender will r... |
Get the k most similar users for each entry in `users`.
Each type of recommender has its own model for the similarity
between users. For example, the factorization_recommender will
return the nearest users based on the cosine similarity
between latent user factors. (This method is not ... |
Recommend the ``k`` highest scored items for each user.
Parameters
----------
users : SArray, SFrame, or list, optional
Users or observation queries for which to make recommendations.
For list, SArray, and single-column inputs, this is simply a set
of user I... |
Recommend the ``k`` highest scored items based on the
interactions given in `observed_items`.
Parameters
----------
observed_items : SArray, SFrame, or list
A list/SArray of items to use to make recommendations, or
an SFrame of items and optionally ratings and/or... |
Compute a model's precision and recall scores for a particular dataset.
Parameters
----------
dataset : SFrame
An SFrame in the same format as the one used during training.
This will be compared to the model's recommendations, which exclude
the (user, item) p... |
Evaluate the prediction error for each user-item pair in the given data
set.
Parameters
----------
dataset : SFrame
An SFrame in the same format as the one used during training.
target : str
The name of the target rating column in `dataset`.
Ret... |
r"""
Evaluate the model's ability to make rating predictions or
recommendations.
If the model is trained to predict a particular target, the
default metric used for model comparison is root-mean-squared error
(RMSE). Suppose :math:`y` and :math:`\widehat{y}` are vectors of lengt... |
Returns a new popularity model matching the data set this model was
trained with. Can be used for comparison purposes.
def _get_popularity_baseline(self):
"""
Returns a new popularity model matching the data set this model was
trained with. Can be used for comparison purposes.
... |
For a collection of item -> item pairs, returns information about the
users in that intersection.
Parameters
----------
item_pairs : 2-column SFrame of two item columns, or a list of
(item_1, item_2) tuples.
Returns
-------
out : SFrame
A ... |
Export the model in Core ML format.
Parameters
----------
filename: str
A valid filename where the model can be saved.
Examples
--------
>>> model.export_coreml('myModel.mlmodel')
def export_coreml(self, filename):
"""
Export the model in Core... |
Evaluate the model on the given dataset.
Parameters
----------
dataset : SFrame
Dataset in the same format used for training. The columns names and
types of the dataset must be the same as that used in training.
metric : str, optional
Name of the ev... |
Predict the target column of the given dataset.
The target column is provided during
:func:`~turicreate.random_forest_regression.create`. If the target column is in the
`dataset` it will be ignored.
Parameters
----------
dataset : SFrame
A dataset that has the... |
Creates a feature vectorizer from input features, returning the spec for
a feature vectorizer that puts everything into a single array of length
equal to the total size of all the input features. Returns a 2-tuple
`(spec, num_dimension)`
Parameters
----------
input_features: [list of 2-tuples]
... |
Read in the Boost version from a given boost_root.
def query_boost_version(boost_root):
'''
Read in the Boost version from a given boost_root.
'''
boost_version = None
if os.path.exists(os.path.join(boost_root,'Jamroot')):
with codecs.open(os.path.join(boost_root,'Ja... |
This clone mimics the way Travis-CI clones a project's repo. So far
Travis-CI is the most limiting in the sense of only fetching partial
history of the repo.
def git_clone(sub_repo, branch, commit = None, cwd = None, no_submodules = False):
'''
This clone mimics the way Travis-CI clon... |
Installs specific toolset on CI system.
def install_toolset(self, toolset):
'''
Installs specific toolset on CI system.
'''
info = toolset_info[toolset]
if sys.platform.startswith('linux'):
os.chdir(self.work_dir)
if 'ppa' in info:
for ppa... |
Create a :class:`~turicreate.svm_classifier.SVMClassifier` to predict the class of a binary
target variable based on a model of which side of a hyperplane the example
falls on. In addition to standard numeric and categorical types, features
can also be extracted automatically from list- or dictionary-type S... |
Return a classification, for each example in the ``dataset``, using the
trained SVM model. The output SFrame contains predictions
as class labels (0 or 1) associated with the example.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must includ... |
Get the right converter function for Keras
def _get_layer_converter_fn(layer, add_custom_layers = False):
"""Get the right converter function for Keras
"""
layer_type = type(layer)
if layer_type in _KERAS_LAYER_REGISTRY:
convert_func = _KERAS_LAYER_REGISTRY[layer_type]
if convert_func i... |
Load a keras model from disk
Parameters
----------
model_network_path: str
Path where the model network is stored (json file)
model_weight_path: str
Path where the model network weights are stored (hdf5 file)
custom_objects:
A dictionary of layers or other custom classes
or ... |
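The load described here follows the standard Keras pattern of rebuilding the architecture from JSON and then loading HDF5 weights; a sketch with hypothetical file paths:
from keras.models import model_from_json

# Hypothetical paths; custom_objects maps custom layer names to their classes.
with open('model.json') as f:
    model = model_from_json(f.read(), custom_objects={})
model.load_weights('weights.h5')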
A method for displaying the Plot object
Notes
-----
- The plot will render either inline in a Jupyter Notebook, or in a
native GUI window, depending on the value provided in
`turicreate.visualization.set_target` (defaults to 'auto').
Examples
--------
... |
A method for saving the Plot object in a vega representation
Parameters
----------
filepath: string
The destination filepath to which the plot object will be saved.
The extension of this filepath determines what format the plot will
be saved as. Currently sup... |
Customized submit script that submits nslave jobs; each job must contain args as a parameter.
Note this can be a lambda function containing additional parameters in the input.
Parameters
nslave: number of slave processes to start up
args: arguments to launch each job
this usually includes th... |
Get the right value from the scikit-tree
def _get_value(scikit_value, mode = 'regressor', scaling = 1.0, n_classes = 2, tree_index = 0):
""" Get the right value from the scikit-tree
"""
# Regression
if mode == 'regressor':
return scikit_value[0] * scaling
# Binary classification
if n_c... |
Traverse through the tree and append to the tree spec.
def _recurse(coreml_tree, scikit_tree, tree_id, node_id, scaling = 1.0, mode = 'regressor',
n_classes = 2, tree_index = 0):
"""Traverse through the tree and append to the tree spec.
"""
if not(HAS_SKLEARN):
raise RuntimeError('scik... |
Convert a generic tree regressor model to the protobuf spec.
This currently supports:
* Decision tree regression
* Gradient boosted tree regression
* Random forest regression
* Decision tree classifier
* Gradient boosted tree classifier
* Random forest classifier
-------... |
Takes images scaled to [0, 1] and returns them appropriately scaled and
mean-subtracted for VGG-16
def _vgg16_data_prep(batch):
"""
Takes images scaled to [0, 1] and returns them appropriately scaled and
mean-subtracted for VGG-16
"""
from mxnet import nd
mean = nd.array([123.68, 116.779, 1... |
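A stand-alone NumPy version of the same preparation, written with NumPy instead of mxnet.nd for illustration; it assumes channel-first RGB batches and uses the standard VGG-16 channel means:
import numpy as np

def vgg16_data_prep(batch):
    # batch: float array of shape (N, 3, H, W) with RGB values in [0, 1]
    mean = np.array([123.68, 116.779, 103.939]).reshape(1, 3, 1, 1)
    return batch * 255.0 - mean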
Create a :class:`StyleTransfer` model.
Parameters
----------
style_dataset: SFrame
Input style images. The columns named by the ``style_feature`` parameters will
be extracted for training the model.
content_dataset : SFrame
Input content images. The columns named by the ``conte... |
Takes input and returns tuple of the input in canonical form (SFrame)
along with an unpack callback function that can be applied to
prediction results to "undo" the canonization.
def _canonize_content_input(self, dataset, single_style):
"""
Takes input and returns tuple of the input in ... |
Stylize an SFrame of Images given a style index or a list of
styles.
Parameters
----------
images : SFrame | Image
A dataset that has the same content image column that was used
during training.
style : int or list, optional
The selected styl... |
Save the model in Core ML format. The Core ML model takes an image of
fixed size and a style index as inputs, and produces an image of fixed
size as output.
Parameters
----------
path : string
A string to the path for saving the Core ML model.
image_shape: tu... |
Returns SFrame of style images used for training the model
Parameters
----------
style: int or list, optional
The selected style or list of styles to return. If `None`, all
styles will be returned.
See Also
--------
stylize
Examples
... |
Convert an MXNet model to the protobuf spec.
Parameters
----------
model: MXNet model
A trained MXNet neural network model.
input_shape: list of tuples
A list of (name, shape) tuples, defining the input names and their
shapes. The list also serves to define the desired order of... |
Load a libsvm model from a path on disk.
This currently supports:
* C-SVC
* NU-SVC
* Epsilon-SVR
* NU-SVR
Parameters
----------
model_path: str
Path on disk where the libsvm model representation is.
Returns
-------
model: libsvm_model
A model of the... |