This is a dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a list of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data.
As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word. |
Arguments |
path: where to cache the data (relative to ~/.keras/dataset). |
num_words: integer or None. Words are ranked by how often they occur in the training set, and only the num_words most frequent words are kept. Any less frequent word will appear as the oov_char value in the sequence data. Defaults to None, in which case all words are kept.
skip_top: skip the top N most frequently occurring words (which may not be informative). These words will appear as the oov_char value in the dataset. Defaults to 0, so no words are skipped.
maxlen: int or None. Maximum sequence length. Any longer sequence will be truncated. Defaults to None, which means no truncation. |
seed: int. Seed for reproducible data shuffling. |
start_char: int. The start of a sequence will be marked with this character. Defaults to 1 because 0 is usually the padding character. |
oov_char: int. The out-of-vocabulary character. Words that were cut out because of the num_words or skip_top limits will be replaced with this character. |
index_from: int. Index actual words with this index and higher. |
**kwargs: Used for backwards compatibility. |
Returns |
Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test).
x_train, x_test: lists of sequences, which are lists of indexes (integers). If the num_words argument was specified, the maximum possible index value is num_words - 1. If the maxlen argument was specified, the largest possible sequence length is maxlen.
y_train, y_test: lists of integer labels (1 or 0). |
Raises |
ValueError: in case maxlen is so low that no input sequence could be kept. |
Note that the 'out of vocabulary' character is only used for words that were present in the training set but are not included because they're not making the num_words cut here. Words that were not seen in the training set but are in the test set have simply been skipped. |
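A minimal usage sketch of load_data (the dataset is downloaded and cached on first use; capping the vocabulary at 10,000 words here is an arbitrary illustrative choice):

```python
import tensorflow as tf

# Keep only the 10,000 most frequent words; rarer words are replaced
# by the oov_char index (2 by default).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(
    num_words=10000
)

print(len(x_train))        # 25000 training reviews
print(sorted(set(y_train.tolist())))  # [0, 1]: negative / positive

# With num_words=10000, no index in any sequence exceeds 9999.
assert max(max(seq) for seq in x_train) <= 9999
```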
get_word_index function |
tf.keras.datasets.imdb.get_word_index(path="imdb_word_index.json") |
Retrieves a dict mapping words to their index in the IMDB dataset. |
Arguments |
path: where to cache the data (relative to ~/.keras/dataset). |
Returns |
The word index dictionary. Keys are word strings, values are their index. |
Example |
# Retrieve the training sequences. |
(x_train, _), _ = keras.datasets.imdb.load_data() |
# Retrieve the word index file mapping words to indices |
word_index = keras.datasets.imdb.get_word_index() |
# Reverse the word index to obtain a dict mapping indices to words |
inverted_word_index = dict((i, word) for (word, i) in word_index.items()) |
# Decode the first sequence in the dataset |
decoded_sequence = " ".join(inverted_word_index[i] for i in x_train[0])
CIFAR100 small images classification dataset
load_data function |
tf.keras.datasets.cifar100.load_data(label_mode="fine") |
Loads the CIFAR100 dataset. |
This is a dataset of 50,000 32x32 color training images and 10,000 test images, labeled over 100 fine-grained classes that are grouped into 20 coarse-grained classes. See more info at the CIFAR homepage. |
Arguments |
label_mode: one of "fine", "coarse". If it is "fine" the category labels are the fine-grained labels, if it is "coarse" the output labels are the coarse-grained superclasses. |
Returns |
Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test). |
x_train: uint8 NumPy array of RGB image data with shape (50000, 32, 32, 3), containing the training data. Pixel values range from 0 to 255.
y_train: uint8 NumPy array of labels (integers in range 0-99) with shape (50000, 1) for the training data. |
x_test: uint8 NumPy array of RGB image data with shape (10000, 32, 32, 3), containing the test data. Pixel values range from 0 to 255.
y_test: uint8 NumPy array of labels (integers in range 0-99) with shape (10000, 1) for the test data. |
Example |
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data() |
assert x_train.shape == (50000, 32, 32, 3) |
assert x_test.shape == (10000, 32, 32, 3) |
assert y_train.shape == (50000, 1) |
assert y_test.shape == (10000, 1) |
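For comparison, a sketch of loading the coarse labels instead: with label_mode="coarse" the label arrays hold the 20 superclass indices (0-19) rather than the 100 fine-grained class indices (this also downloads the dataset on first use):

```python
import tensorflow as tf

# Request superclass labels instead of the fine-grained ones.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data(
    label_mode="coarse"
)

# Image arrays are identical to the "fine" case; only labels differ.
assert x_train.shape == (50000, 32, 32, 3)
assert y_train.shape == (50000, 1)
assert int(y_train.max()) <= 19  # 20 coarse superclasses
```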
ResNet and ResNetV2 |
ResNet50 function |
tf.keras.applications.ResNet50( |
include_top=True, |
weights="imagenet", |
input_tensor=None, |
input_shape=None, |
pooling=None, |
classes=1000, |
**kwargs |
) |
Instantiates the ResNet50 architecture. |
Reference |
Deep Residual Learning for Image Recognition (CVPR 2015) |
For image classification use cases, see this page for detailed examples. |
For transfer learning use cases, make sure to read the guide to transfer learning & fine-tuning. |
Note: each Keras Application expects a specific kind of input preprocessing. For ResNet, call tf.keras.applications.resnet.preprocess_input on your inputs before passing them to the model. resnet.preprocess_input will convert the input images from RGB to BGR, then will zero-center each color channel with respect to the ImageNet dataset, without scaling.
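A short sketch of the preprocessing flow described above, using a random dummy image and randomly initialized weights (weights=None avoids the pretrained-weight download, so the predictions themselves are meaningless here; only the shapes matter):

```python
import numpy as np
import tensorflow as tf

# Build ResNet50 with random weights instead of downloading "imagenet".
model = tf.keras.applications.ResNet50(weights=None)

# A dummy batch of one 224x224 RGB image with values in [0, 255].
images = np.random.uniform(0, 255, size=(1, 224, 224, 3)).astype("float32")

# Converts RGB to BGR and zero-centers each channel (ImageNet means).
x = tf.keras.applications.resnet.preprocess_input(images)

preds = model.predict(x)
print(preds.shape)  # (1, 1000): one score per ImageNet class
```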
Arguments |