Infer output shapes and return dictionary of output name to shape
:param :class:`~mxnet.symbol.Symbol` sym: symbol to perform infer shape on
:param dict of (str, nd.NDArray) params: dict of model parameters
:param list of tuple(int, ...) in_shape: list of all input shapes
:param in_label: name of label typicall... |
Convert weights to numpy
def convert_weights_to_numpy(weights_dict):
"""Convert weights to numpy"""
return {k.replace("arg:", "").replace("aux:", ""): v.asnumpy()
for k, v in weights_dict.items()} |
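Since the helper only strips name prefixes and calls `.asnumpy()`, its behavior can be checked without MXNet by mocking the NDArray interface (`FakeNDArray` below is a hypothetical stand-in, not part of MXNet):

```python
import numpy as np

class FakeNDArray:
    """Illustrative stand-in for mxnet.nd.NDArray (only asnumpy is needed here)."""
    def __init__(self, data):
        self._data = np.asarray(data)
    def asnumpy(self):
        return self._data

def convert_weights_to_numpy(weights_dict):
    """Strip MXNet's 'arg:'/'aux:' name prefixes and convert each value to numpy."""
    return {k.replace("arg:", "").replace("aux:", ""): v.asnumpy()
            for k, v in weights_dict.items()}

weights = {"arg:fc1_weight": FakeNDArray([[1.0, 2.0]]),
           "aux:bn_moving_mean": FakeNDArray([0.5])}
converted = convert_weights_to_numpy(weights)
print(sorted(converted))  # -> ['bn_moving_mean', 'fc1_weight']
```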
Convert MXNet graph to ONNX graph
Parameters
----------
sym : :class:`~mxnet.symbol.Symbol`
MXNet symbol object
params : dict of ``str`` to :class:`~mxnet.ndarray.NDArray`
Dict of converted parameters stored in ``mxnet.ndarray.NDArray`` format
in_shape : ... |
Compute learning rate and refactor scheduler
Parameters
----------
learning_rate : float
original learning rate
lr_refactor_step : comma-separated str
epochs to change learning rate
lr_refactor_ratio : float
lr *= ratio at certain steps
num_example : int
number o... |
Wrapper for training phase.
Parameters
----------
net : str
symbol name for the network structure
train_path : str
record file path for training
num_classes : int
number of object classes, not including background
batch_size : int
training batch-size
data_sh... |
This is a set of 50 images representative of ImageNet images.
This dataset was collected by randomly finding a working ImageNet link and then pasting the
original ImageNet image into Google image search restricted to images licensed for reuse. A
similar image (now with rights to reuse) was downloaded as a ... |
Return the Boston housing data in a nice package.
def boston(display=False):
""" Return the Boston housing data in a nice package. """
d = sklearn.datasets.load_boston()  # note: load_boston was removed in scikit-learn 1.2
df = pd.DataFrame(data=d.data, columns=d.feature_names) # pylint: disable=E1101
return df, d.target |
Return the classic IMDB sentiment analysis training data in a nice package.
Full data is at: http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Paper to cite when using the data is: http://www.aclweb.org/anthology/P11-1015
def imdb(display=False):
""" Return the classic IMDB sentiment analysis t... |
Predict total number of non-violent crimes per 100K population.
This dataset is from the classic UCI Machine Learning repository:
https://archive.ics.uci.edu/ml/datasets/Communities+and+Crime+Unnormalized
def communitiesandcrime(display=False):
""" Predict total number of non-violent crimes per 100K populat... |
Return the diabetes data in a nice package.
def diabetes(display=False):
""" Return the diabetes data in a nice package. """
d = sklearn.datasets.load_diabetes()
df = pd.DataFrame(data=d.data, columns=d.feature_names) # pylint: disable=E1101
return df, d.target |
Return the classic iris data in a nice package.
def iris(display=False):
""" Return the classic iris data in a nice package. """
d = sklearn.datasets.load_iris()
df = pd.DataFrame(data=d.data, columns=d.feature_names) # pylint: disable=E1101
if display:
return df, [d.target_names[v] for v in d... |
Return the Adult census data in a nice package.
def adult(display=False):
""" Return the Adult census data in a nice package. """
dtypes = [
("Age", "float32"), ("Workclass", "category"), ("fnlwgt", "float32"),
("Education", "category"), ("Education-Num", "float32"), ("Marital Status", "categor... |
A nicely packaged version of NHANES I data with survival times as labels.
def nhanesi(display=False):
""" A nicely packaged version of NHANES I data with survival times as labels.
"""
X = pd.read_csv(cache(github_data_url + "NHANESI_subset_X.csv"))
y = pd.read_csv(cache(github_data_url + "NHANESI_sub... |
A nicely packaged version of CRIC data with progression to ESRD within 4 years as the label.
def cric(display=False):
""" A nicely packaged version of CRIC data with progression to ESRD within 4 years as the label.
"""
X = pd.read_csv(cache(github_data_url + "CRIC_time_4yearESRD_X.csv"))
y = np.loadtxt... |
Correlated Groups 60
A simulated dataset with tight correlations among distinct groups of features.
def corrgroups60(display=False):
""" Correlated Groups 60
A simulated dataset with tight correlations among distinct groups of features.
"""
# set a constant seed
old_seed = np.random.... |
A simulated dataset with 60 independent features.
def independentlinear60(display=False):
""" A simulated dataset with 60 independent features.
"""
# set a constant seed
old_state = np.random.get_state()
np.random.seed(0)
# generate dataset ... |
Ranking datasets from lightgbm repository.
def rank():
""" Ranking datasets from lightgbm repository.
"""
rank_data_url = 'https://raw.githubusercontent.com/Microsoft/LightGBM/master/examples/lambdarank/'
x_train, y_train = sklearn.datasets.load_svmlight_file(cache(rank_data_url + 'rank.train'))
x_... |
An approximation of holdout that only retrains the model once.
This is also called ROAR (RemOve And Retrain) in work by Google. It is much more computationally
efficient than the holdout method because it masks the most important features in every sample
and then retrains the model once, instead of retrai... |
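The per-sample masking step described above can be sketched as follows; `mask_top_features` is a hypothetical helper (the benchmark then retrains the model once on the masked data rather than once per feature count):

```python
import numpy as np

def mask_top_features(X, attributions, k, mask_value=0.0):
    """Replace each sample's k most important features (by |attribution|) with a constant."""
    X_masked = X.copy()
    for i in range(X.shape[0]):
        top = np.argsort(-np.abs(attributions[i]))[:k]  # indices of the k largest attributions
        X_masked[i, top] = mask_value
    return X_masked

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
attr = np.array([[0.1, 0.9, 0.5],
                 [0.7, 0.2, 0.3]])
masked = mask_top_features(X, attr, k=1)
# row 0 masks feature 1, row 1 masks feature 0
```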
The model is retrained for each test sample with the non-important features set to a constant.
If you want to know how important a set of features is you can ask how the model would be
different if only those features had existed. To determine this we can mask the other features
across the entire training ... |
The model is re-evaluated for each test sample with the non-important features set to their mean.
def keep_mask(nkeep, X_train, y_train, X_test, y_test, attr_test, model_generator, metric, trained_model, random_state):
""" The model is re-evaluated for each test sample with the non-important features set to their mean... |
The model is re-evaluated for each test sample with the non-important features set to an imputed value.
Note that the imputation is done using a multivariate normality assumption on the dataset. This depends on
being able to estimate the full data covariance matrix (and inverse) accurately. So X_train.shape[0] s... |
The model is re-evaluated for each test sample with the non-important features set to resampled background values.
def keep_resample(nkeep, X_train, y_train, X_test, y_test, attr_test, model_generator, metric, trained_model, random_state):
""" The model is re-evaluated for each test sample with the non-important featur... |
How well the features plus a constant base rate sum up to the model output.
def local_accuracy(X_train, y_train, X_test, y_test, attr_test, model_generator, metric, trained_model):
""" How well the features plus a constant base rate sum up to the model output.
"""
X_train, X_test = to_array(... |
Generate a random array with a fixed seed.
def const_rand(size, seed=23980):
""" Generate a random array with a fixed seed.
"""
old_state = np.random.get_state()
np.random.seed(seed)
out = np.random.rand(size)
np.random.set_state(old_state)
return out |
Shuffle an array in-place with a fixed seed.
def const_shuffle(arr, seed=23980):
""" Shuffle an array in-place with a fixed seed.
"""
old_state = np.random.get_state()
np.random.seed(seed)
np.random.shuffle(arr)
np.random.set_state(old_state) |
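Note that `np.random.seed()` returns None, so saving its return value (as `old_seed` above) cannot restore the previous RNG state; `np.random.get_state()`/`set_state()` can. A minimal sketch of the intended save/seed/restore pattern:

```python
import numpy as np

def const_rand(size, seed=23980):
    """Draw a reproducible random array without disturbing the caller's RNG stream."""
    old_state = np.random.get_state()   # snapshot the global RNG state
    np.random.seed(seed)
    out = np.random.rand(size)
    np.random.set_state(old_state)      # restore it exactly
    return out

# same seed -> identical output
assert np.array_equal(const_rand(5), const_rand(5))

# the caller's stream is unaffected by the seeded detour
np.random.seed(0)
x1 = np.random.rand()
np.random.seed(0)
const_rand(5)
x2 = np.random.rand()
assert x1 == x2
```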
Estimate the SHAP values for a set of samples.
Parameters
----------
X : numpy.array or pandas.DataFrame
A matrix of samples (# samples x # features) on which to explain the model's output.
Returns
-------
For models with a single output this returns a mat... |
Plots SHAP values for image inputs.
def image_plot(shap_values, x, labels=None, show=True, width=20, aspect=0.2, hspace=0.2, labelpad=None):
""" Plots SHAP values for image inputs.
"""
multi_output = True
if not isinstance(shap_values, list):
multi_output = False
shap_values = [shap_values]
... |
A leaf ordering is under-defined; this picks the ordering that keeps nearby samples similar.
def hclust_ordering(X, metric="sqeuclidean"):
""" A leaf ordering is under-defined; this picks the ordering that keeps nearby samples similar.
"""
# compute a hierarchical clustering
D = sp.spatial.distanc... |
Order other features by how much interaction they seem to have with the feature at the given index.
This just bins the SHAP values for a feature along that feature's value. For true Shapley interaction
index values for SHAP see the interaction_contribs option implemented in XGBoost.
def approximate_interactio... |
Converts human agreement differences to numerical scores for coloring.
def _human_score_map(human_consensus, methods_attrs):
""" Converts human agreement differences to numerical scores for coloring.
"""
v = 1 - min(np.sum(np.abs(methods_attrs - human_consensus)) / (np.abs(human_consensus).sum() + 1), 1.0... |
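The visible formula maps total absolute disagreement with the human consensus into [0, 1]. A standalone sketch (`human_score` is an illustrative name, and the original may apply further scaling in the truncated part):

```python
import numpy as np

def human_score(human_consensus, methods_attrs):
    """1.0 when a method matches the human consensus exactly,
    dropping toward 0 as total absolute disagreement grows."""
    disagreement = np.sum(np.abs(methods_attrs - human_consensus))
    scale = np.abs(human_consensus).sum() + 1  # +1 guards against an all-zero consensus
    return 1 - min(disagreement / scale, 1.0)

consensus = np.array([2.0, 2.0, 0.0])
exact = human_score(consensus, np.array([2.0, 2.0, 0.0]))  # no disagreement -> 1.0
off = human_score(consensus, np.array([0.0, 0.0, 0.0]))    # 1 - 4/5
```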
Draw the bars and separators.
def draw_bars(out_value, features, feature_type, width_separators, width_bar):
"""Draw the bars and separators."""
rectangle_list = []
separator_list = []
pre_val = out_value
for index, features in enumerate(features):
if feature_type == 'p... |
Format data.
def format_data(data):
"""Format data."""
# Format negative features
neg_features = np.array([[data['features'][x]['effect'],
data['features'][x]['value'],
data['featureNames'][x]]
for x in data['features'... |
Draw additive plot.
def draw_additive_plot(data, figsize, show, text_rotation=0):
"""Draw additive plot."""
# Turn off interactive plot
if not show:
plt.ioff()
# Format data
neg_features, total_neg, pos_features, total_pos = format_data(data)
# Compute overall metrics
... |
Fails gracefully when various install steps don't work.
def try_run_setup(**kwargs):
""" Fails gracefully when various install steps don't work.
"""
try:
run_setup(**kwargs)
except Exception as e:
print(str(e))
if "xgboost" in str(e).lower():
kwargs["test_xgboost"] ... |
The backward hook which computes the deeplift
gradient for an nn.Module
def deeplift_grad(module, grad_input, grad_output):
"""The backward hook which computes the deeplift
gradient for an nn.Module
"""
# first, get the module type
module_type = module.__class__.__name__
# first, check the ... |
The forward hook used to save interim tensors, detached
from the graph. Used to calculate the multipliers
def add_interim_values(module, input, output):
"""The forward hook used to save interim tensors, detached
from the graph. Used to calculate the multipliers
"""
try:
del module.x
exc... |
A forward hook which saves the tensor - attached to its graph.
Used if we want to explain the interim outputs of a model
def get_target_input(module, input, output):
"""A forward hook which saves the tensor - attached to its graph.
Used if we want to explain the interim outputs of a model
"""
try:
... |
Add handles to all non-container layers in the model,
recursing through container layers.
def add_handles(self, model, forward_handle, backward_handle):
"""
Add handles to all non-container layers in the model,
recursing through container layers.
"""
handles_list = []
... |
Removes the x and y attributes which were added by the forward handles
Recursively searches for non-container layers
def remove_attributes(self, model):
"""
Removes the x and y attributes which were added by the forward handles
Recursively searches for non-container layers
"""
... |
This gets a JSON dump of an XGBoost model while ensuring the feature names are their indexes.
def get_xgboost_json(model):
""" This gets a JSON dump of an XGBoost model while ensuring the feature names are their indexes.
"""
fnames = model.feature_names
model.feature_names = None
json_trees = mod... |
This computes the expected value conditioned on the given label value.
def __dynamic_expected_value(self, y):
""" This computes the expected value conditioned on the given label value.
"""
return self.model.predict(self.data, np.ones(self.data.shape[0]) * y, output=self.model_output).mean(0) |
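The idea is to average the model's output over the background data with every label pinned to the given y. A sketch with a hypothetical mock model (`MockModel` and its `predict` signature are illustrative, not a real booster API):

```python
import numpy as np

class MockModel:
    """Pretend model whose per-sample output depends on the label fed in."""
    def predict(self, data, labels):
        return data.sum(axis=1) + labels

data = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
model = MockModel()

def dynamic_expected_value(y):
    """Mean model output over the background data, all labels fixed to y."""
    return model.predict(data, np.ones(data.shape[0]) * y).mean(0)

ev0 = dynamic_expected_value(0.0)  # (3 + 7) / 2 = 5.0
ev1 = dynamic_expected_value(1.0)  # (4 + 8) / 2 = 6.0
```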
Estimate the SHAP values for a set of samples.
Parameters
----------
X : numpy.array, pandas.DataFrame or catboost.Pool (for catboost)
A matrix of samples (# samples x # features) on which to explain the model's output.
y : numpy.array
An array of label values f... |
Estimate the SHAP interaction values for a set of samples.
Parameters
----------
X : numpy.array, pandas.DataFrame or catboost.Pool (for catboost)
A matrix of samples (# samples x # features) on which to explain the model's output.
y : numpy.array
An array of la... |
A consistent interface to make predictions from this model.
def get_transform(self, model_output):
""" A consistent interface to make predictions from this model.
"""
if model_output == "margin":
transform = "identity"
elif model_output == "probability":
if self.... |
A consistent interface to make predictions from this model.
Parameters
----------
tree_limit : None (default) or int
Limit the number of trees used by the model. By default None means use the limit of the
original model, and -1 means no limit.
def predict(self, X, y... |
Return the values for the model applied to X.
Parameters
----------
X : list,
if framework == 'tensorflow': numpy.array, or pandas.DataFrame
if framework == 'pytorch': torch.tensor
A tensor (or list of tensors) of samples (where X.shape[0] == # samples) on wh... |
Visualize the given SHAP values with an additive force layout.
Parameters
----------
base_value : float
This is the reference value that the feature contributions start from. For SHAP values it should
be the value of explainer.expected_value.
shap_values : numpy.array
Matri... |
Save html plots to an output file.
def save_html(out_file, plot_html):
""" Save html plots to an output file.
"""
internal_open = False
if isinstance(out_file, str):
out_file = open(out_file, "w")
internal_open = True
out_file.write("<html><head><script>\n")
# dump the js code
... |
Follows a set of ops assuming their value is False and find blocked Switch paths.
This is used to prune away parts of the model graph that are only used during the training
phase (like dropout, batch norm, etc.).
def tensors_blocked_by_false(ops):
""" Follows a set of ops assuming their value is False and... |
Just decompose softmax into its components and recurse; we can handle all of them :)
We assume the 'axis' is the last dimension because the TF codebase swaps the 'axis' to
the last dimension before the softmax op if 'axis' is not already the last dimension.
We also don't subtract the max before tf.exp for ... |
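For reference, the standard decomposition of softmax into its component ops in NumPy; the max subtraction shown here is the usual numerical-stability step that the text above notes the TF path skips:

```python
import numpy as np

def softmax_last_axis(x):
    """Softmax over the last axis: shift, exponentiate, normalize."""
    shifted = x - x.max(axis=-1, keepdims=True)  # keeps exp() from overflowing
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[1000.0, 1001.0, 1002.0]])  # naive exp() would overflow here
p = softmax_last_axis(logits)
```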
Return which inputs of this operation are variable (i.e. depend on the model inputs).
def _variable_inputs(self, op):
""" Return which inputs of this operation are variable (i.e. depend on the model inputs).
"""
if op.name not in self._vinputs:
self._vinputs[op.name] = np.array([t.o... |
Get the SHAP value computation graph for a given model output.
def phi_symbolic(self, i):
""" Get the SHAP value computation graph for a given model output.
"""
if self.phi_symbolics[i] is None:
# replace the gradients for all the non-linear activations
# we do this by ... |
Runs the model while also setting the learning phase flags to False.
def run(self, out, model_inputs, X):
""" Runs the model while also setting the learning phase flags to False.
"""
feed_dict = dict(zip(model_inputs, X))
for t in self.learning_phase_flags:
feed_dict[t] = Fa... |
Passes a gradient op creation request to the correct handler.
def custom_grad(self, op, *grads):
""" Passes a gradient op creation request to the correct handler.
"""
return op_handlers[op.type](self, op, *grads) |
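`custom_grad` is table-driven dispatch: `op_handlers` maps TF op type names to handler callables. A self-contained sketch of the pattern (the `Op` class and handler bodies are illustrative, not SHAP's real tables):

```python
class Op:
    """Illustrative stand-in for a TF operation exposing a .type attribute."""
    def __init__(self, type_):
        self.type = type_

# each handler receives the explainer, the op, and the incoming gradients
op_handlers = {
    "Relu": lambda self, op, *grads: "nonlinear handler for " + op.type,
    "MatMul": lambda self, op, *grads: "linear handler for " + op.type,
}

def custom_grad(self, op, *grads):
    """Route the gradient op creation request to the handler registered for op.type."""
    return op_handlers[op.type](self, op, *grads)

result = custom_grad(None, Op("Relu"))  # -> "nonlinear handler for Relu"
```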
Use ssh to run the experiments on remote machines in parallel.
Parameters
----------
experiments : iterable
Output of shap.benchmark.experiments(...).
thread_hosts : list of strings
Each host has the format "host_name:path_to_python_binary" and can appear multiple times
in the ... |
Create a SHAP monitoring plot.
(Note: this function is preliminary and subject to change.)
A SHAP monitoring plot is meant to display the behavior of a model
over time. Often the shap_values given to this plot explain the loss
of a model, so changes in a feature's impact on the model's loss over
... |
Summarize a dataset with k mean samples weighted by the number of data points they
each represent.
Parameters
----------
X : numpy.array or pandas.DataFrame
Matrix of data samples to summarize (# samples x # features)
k : int
Number of means to use for approximation.
round_val... |
Estimate the SHAP values for a set of samples.
Parameters
----------
X : numpy.array or pandas.DataFrame or any scipy.sparse matrix
A matrix of samples (# samples x # features) on which to explain the model's output.
nsamples : "auto" or int
Number of times to r... |
Use the SHAP values as an embedding which we project to 2D for visualization.
Parameters
----------
ind : int or string
If this is an int it is the index of the feature to use to color the embedding.
If this is a string it is either the name of the feature, or it can have the
form "... |
Create a SHAP dependence plot, colored by an interaction feature.
Plots the value of the feature on the x-axis and the SHAP value of the same feature
on the y-axis. This shows how the model depends on the given feature, and is like a
richer extension of the classical partial dependence plots. Vertical dis... |
Runtime
transform = "negate"
sort_order = 1
def runtime(X, y, model_generator, method_name):
""" Runtime
transform = "negate"
sort_order = 1
"""
old_state = np.random.get_state()
np.random.seed(3293)
# average the method scores over several train/test splits
method_reps = []
for... |
Local Accuracy
transform = "identity"
sort_order = 2
def local_accuracy(X, y, model_generator, method_name):
""" Local Accuracy
transform = "identity"
sort_order = 2
"""
def score_map(true, pred):
""" Converts local accuracy from % of standard deviation to numerical scores for colo... |
Keep Negative (mask)
xlabel = "Max fraction of features kept"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 5
def keep_negative_mask(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Negative (mask)
xlabel = "Max fraction of features kept"
ylabel = "Neg... |
Keep Absolute (mask)
xlabel = "Max fraction of features kept"
ylabel = "R^2"
transform = "identity"
sort_order = 6
def keep_absolute_mask__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Absolute (mask)
xlabel = "Max fraction of features kept"
ylabel = "R^2"
transform =... |
Remove Positive (mask)
xlabel = "Max fraction of features removed"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 7
def remove_positive_mask(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Positive (mask)
xlabel = "Max fraction of features removed"
y... |
Remove Absolute (mask)
xlabel = "Max fraction of features removed"
ylabel = "1 - R^2"
transform = "one_minus"
sort_order = 9
def remove_absolute_mask__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Absolute (mask)
xlabel = "Max fraction of features removed"
ylabel = "1 -... |
Keep Negative (resample)
xlabel = "Max fraction of features kept"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 11
def keep_negative_resample(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Negative (resample)
xlabel = "Max fraction of features kept"
... |
Keep Absolute (resample)
xlabel = "Max fraction of features kept"
ylabel = "R^2"
transform = "identity"
sort_order = 12
def keep_absolute_resample__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Absolute (resample)
xlabel = "Max fraction of features kept"
ylabel = "R^2"
... |
Keep Absolute (resample)
xlabel = "Max fraction of features kept"
ylabel = "ROC AUC"
transform = "identity"
sort_order = 12
def keep_absolute_resample__roc_auc(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Absolute (resample)
xlabel = "Max fraction of features kept"
ylabel =... |
Remove Positive (resample)
xlabel = "Max fraction of features removed"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 13
def remove_positive_resample(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Positive (resample)
xlabel = "Max fraction of features r... |
Remove Absolute (resample)
xlabel = "Max fraction of features removed"
ylabel = "1 - R^2"
transform = "one_minus"
sort_order = 15
def remove_absolute_resample__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Absolute (resample)
xlabel = "Max fraction of features removed"
... |
Remove Absolute (resample)
xlabel = "Max fraction of features removed"
ylabel = "1 - ROC AUC"
transform = "one_minus"
sort_order = 15
def remove_absolute_resample__roc_auc(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Absolute (resample)
xlabel = "Max fraction of features remo... |
Keep Negative (impute)
xlabel = "Max fraction of features kept"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 17
def keep_negative_impute(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Negative (impute)
xlabel = "Max fraction of features kept"
ylabel... |
Keep Absolute (impute)
xlabel = "Max fraction of features kept"
ylabel = "R^2"
transform = "identity"
sort_order = 18
def keep_absolute_impute__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Absolute (impute)
xlabel = "Max fraction of features kept"
ylabel = "R^2"
tran... |
Keep Absolute (impute)
xlabel = "Max fraction of features kept"
ylabel = "ROC AUC"
transform = "identity"
sort_order = 19
def keep_absolute_impute__roc_auc(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Absolute (impute)
xlabel = "Max fraction of features kept"
ylabel = "ROC ... |
Remove Positive (impute)
xlabel = "Max fraction of features removed"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 7
def remove_positive_impute(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Positive (impute)
xlabel = "Max fraction of features removed"... |
Remove Absolute (impute)
xlabel = "Max fraction of features removed"
ylabel = "1 - R^2"
transform = "one_minus"
sort_order = 9
def remove_absolute_impute__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Absolute (impute)
xlabel = "Max fraction of features removed"
ylabel ... |
Remove Absolute (impute)
xlabel = "Max fraction of features removed"
ylabel = "1 - ROC AUC"
transform = "one_minus"
sort_order = 9
def remove_absolute_impute__roc_auc(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Absolute (impute)
xlabel = "Max fraction of features removed"
... |
Keep Negative (retrain)
xlabel = "Max fraction of features kept"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 7
def keep_negative_retrain(X, y, model_generator, method_name, num_fcounts=11):
""" Keep Negative (retrain)
xlabel = "Max fraction of features kept"
ylab... |
Remove Positive (retrain)
xlabel = "Max fraction of features removed"
ylabel = "Negative mean model output"
transform = "negate"
sort_order = 11
def remove_positive_retrain(X, y, model_generator, method_name, num_fcounts=11):
""" Remove Positive (retrain)
xlabel = "Max fraction of features remo... |
Batch Remove Absolute (retrain)
xlabel = "Fraction of features removed"
ylabel = "1 - R^2"
transform = "one_minus"
sort_order = 13
def batch_remove_absolute_retrain__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Batch Remove Absolute (retrain)
xlabel = "Fraction of features remove... |
Batch Keep Absolute (retrain)
xlabel = "Fraction of features kept"
ylabel = "R^2"
transform = "identity"
sort_order = 13
def batch_keep_absolute_retrain__r2(X, y, model_generator, method_name, num_fcounts=11):
""" Batch Keep Absolute (retrain)
xlabel = "Fraction of features kept"
ylabel = "... |
Batch Remove Absolute (retrain)
xlabel = "Fraction of features removed"
ylabel = "1 - ROC AUC"
transform = "one_minus"
sort_order = 13
def batch_remove_absolute_retrain__roc_auc(X, y, model_generator, method_name, num_fcounts=11):
""" Batch Remove Absolute (retrain)
xlabel = "Fraction of featur... |
Batch Keep Absolute (retrain)
xlabel = "Fraction of features kept"
ylabel = "ROC AUC"
transform = "identity"
sort_order = 13
def batch_keep_absolute_retrain__roc_auc(X, y, model_generator, method_name, num_fcounts=11):
""" Batch Keep Absolute (retrain)
xlabel = "Fraction of features kept"
y... |
Test an explanation method.
def __score_method(X, y, fcounts, model_generator, score_function, method_name, nreps=10, test_size=100, cache_dir="/tmp"):
""" Test an explanation method.
"""
old_state = np.random.get_state()
np.random.seed(3293)
# average the method scores over several train/test splits
... |
AND (false/false)
This tests how well a feature attribution method agrees with human intuition
for an AND operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
... |
AND (false/true)
This tests how well a feature attribution method agrees with human intuition
for an AND operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
i... |
AND (true/true)
This tests how well a feature attribution method agrees with human intuition
for an AND operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
if... |
OR (false/false)
This tests how well a feature attribution method agrees with human intuition
for an OR operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
if... |
OR (false/true)
This tests how well a feature attribution method agrees with human intuition
for an OR operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
if ... |
OR (true/true)
This tests how well a feature attribution method agrees with human intuition
for an OR operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
if c... |
XOR (false/false)
This tests how well a feature attribution method agrees with human intuition
for an eXclusive OR operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 po... |
XOR (false/true)
This tests how well a feature attribution method agrees with human intuition
for an eXclusive OR operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 poi... |
XOR (true/true)
This tests how well a feature attribution method agrees with human intuition
for an eXclusive OR operation combined with linear effects. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 poin... |
SUM (false/false)
This tests how well a feature attribution method agrees with human intuition
for a SUM operation. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
if cough: +2 points
trans... |
SUM (false/true)
This tests how well a feature attribution method agrees with human intuition
for a SUM operation. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
if cough: +2 points
transf... |
SUM (true/true)
This tests how well a feature attribution method agrees with human intuition
for a SUM operation. This metric deals
specifically with the question of credit allocation for the following function
when all three inputs are true:
if fever: +2 points
if cough: +2 points
transfo... |
Uses block matrix inversion identities to quickly estimate transforms.
After a bit of matrix math we can isolate a transform matrix (# features x # features)
that is independent of any sample we are explaining. It is the result of averaging over
all feature permutations, but we just use a fixed... |
Estimate the SHAP values for a set of samples.
Parameters
----------
X : numpy.array or pandas.DataFrame
A matrix of samples (# samples x # features) on which to explain the model's output.
Returns
-------
For models with a single output this returns a matri... |
4-Layer Neural Network
def independentlinear60__ffnn():
""" 4-Layer Neural Network
"""
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=60))
model.add(Dense(20, activation='relu'))
model.add(Dense(2... |