Datasets within Keras

One of the common problems in deep learning (or machine learning in general) is finding the right dataset for building and testing predictive models.

Fortunately, the keras.datasets module already includes methods to load and fetch popular reference datasets.

Here's the list of available datasets:

  1. Boston Housing (regression)
  2. CIFAR10 (classification of 10 image labels)
  3. CIFAR100 (classification of 100 image labels)
  4. MNIST (classification of 10 digits)
  5. Fashion-MNIST (classification of 10 fashion categories)
  6. IMDB Movie Reviews (binary text classification)
  7. Reuters News (multiclass text classification)

But first, here are a few imports and helper functions that will help us explore each dataset:

%matplotlib inline
import matplotlib.pyplot as plt
import keras
from termcolor import colored

# Print the shapes of the training and testing arrays
def show_shapes(x_train, y_train, x_test, y_test, color='green'):
    print(colored('Training shape:', color, attrs=['bold']))
    print('  x_train.shape:', x_train.shape)
    print('  y_train.shape:', y_train.shape)
    print(colored('\nTesting shape:', color, attrs=['bold']))
    print('  x_test.shape:', x_test.shape)
    print('  y_test.shape:', y_test.shape)

# Display a single image with the axes hidden; printing the returned
# AxesImage object is what produces the 'AxesImage(...)' line in the outputs below
def plot_data(my_data, cmap=None):
    plt.axis('off')
    fig = plt.imshow(my_data, cmap=cmap)
    fig.axes.get_xaxis().set_visible(False)
    fig.axes.get_yaxis().set_visible(False)
    print(fig)

# Print one raw x-y sample (for tabular and text datasets)
def show_sample(x_train, y_train, idx=0, color='blue'):
    print(colored('x_train sample:', color, attrs=['bold']))
    print(x_train[idx])
    print(colored('\ny_train sample:', color, attrs=['bold']))
    print(y_train[idx])

# Print the label and shape of one sample, then plot it (for image datasets)
def show_sample_image(x_train, y_train, idx=0, color='blue', cmap=None):
    print(colored('Label:', color, attrs=['bold']), y_train[idx])
    print(colored('Shape:', color, attrs=['bold']), x_train[idx].shape)
    print()
    plot_data(x_train[idx], cmap=cmap)

1. Boston Housing (regression)

This dataset contains 13 attributes of houses at different locations around the Boston suburbs in the late 1970s. Targets are the median values of the houses at a location (in k$).

Code to Load Dataset:

from keras.datasets import boston_housing

(x_train, y_train), (x_test, y_test) = boston_housing.load_data()

Note: load_data() returns two tuples of NumPy arrays. The first tuple holds the training x-y pairs, while the second holds the testing x-y pairs.

Deploy Helper Functions to Understand Dataset:

show_shapes(x_train, y_train, x_test, y_test)

print('\n******************************\n')

show_sample(x_train, y_train)

Output:

Training shape:
  x_train.shape: (404, 13)
  y_train.shape: (404,)

Testing shape:
  x_test.shape: (102, 13)
  y_test.shape: (102,)

******************************

x_train sample:
[   1.23247    0.         8.14       0.         0.538      6.142     91.7
    3.9769     4.       307.        21.       396.9       18.72   ]

y_train sample:
15.2
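
The 13 features are on very different scales (compare the crime rate to the tax column in the sample above), so a common next step before fitting a regression model is to standardize them using statistics computed on the training set only. Here's a minimal sketch, reusing the arrays we just loaded:

# Standardize each feature column with training-set mean and standard deviation
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)

x_train_scaled = (x_train - mean) / std
x_test_scaled = (x_test - mean) / std   # reuse the training-set statistics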

2. CIFAR10 (classification of 10 image labels)

Dataset of 50,000 32x32 color training images, labeled over 10 categories, and 10,000 test images.

load_data() returns two types of data:

  1. x_train and x_test
    • uint8 array of RGB image data with shape (num_samples, 32, 32, 3).
  2. y_train and y_test
    • uint8 array of category labels (integers in range 0-9) with shape (num_samples, 1).

Code:

from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

show_shapes(x_train, y_train, x_test, y_test)
print('\n******************************\n')
show_sample_image(x_train, y_train)

Output:

Training shape:
  x_train.shape: (50000, 32, 32, 3)
  y_train.shape: (50000, 1)

Testing shape:
  x_test.shape: (10000, 32, 32, 3)
  y_test.shape: (10000, 1)

******************************

Label: [6]
Shape: (32, 32, 3)

AxesImage(54,36;334.8x217.44)

[CIFAR10 sample image]
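
Before training a classifier, the uint8 pixel values are usually scaled to the [0, 1] range and the integer labels one-hot encoded. A quick sketch using keras.utils.to_categorical:

from keras.utils import to_categorical

# Scale pixel values from 0-255 to [0, 1]
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# One-hot encode the 10 class labels
y_train = to_categorical(y_train, num_classes=10)   # shape becomes (50000, 10)
y_test = to_categorical(y_test, num_classes=10)     # shape becomes (10000, 10)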

3. CIFAR100 (classification of 100 image labels)

Dataset of 50,000 32x32 color training images, labeled over 100 categories, and 10,000 test images.

load_data() returns two types of data:

  1. x_train and x_test
    • uint8 array of RGB image data with shape (num_samples, 32, 32, 3).
  2. y_train and y_test
    • uint8 array of category labels (integers in range 0-99) with shape (num_samples, 1).

Code:

from keras.datasets import cifar100
(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode='fine')

show_shapes(x_train, y_train, x_test, y_test)
print('\n******************************\n')
show_sample_image(x_train, y_train)

Output:

Training shape:
  x_train.shape: (50000, 32, 32, 3)
  y_train.shape: (50000, 1)

Testing shape:
  x_test.shape: (10000, 32, 32, 3)
  y_test.shape: (10000, 1)

******************************

Label: [19]
Shape: (32, 32, 3)

AxesImage(54,36;334.8x217.44)

[CIFAR100 sample image]
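
The label_mode='fine' argument used above selects the 100 fine-grained labels; cifar100.load_data also accepts label_mode='coarse', which groups them into 20 superclasses:

from keras.datasets import cifar100

# Coarse labels: integers in range 0-19 (20 superclasses)
(x_train, y_train), (x_test, y_test) = cifar100.load_data(label_mode='coarse')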

4. MNIST (classification of 10 digits)

Dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images.

load_data() returns two types of data:

  1. x_train and x_test
    • uint8 array of grayscale image data with shape (num_samples, 28, 28).
  2. y_train and y_test
    • uint8 array of digit labels (integers in range 0-9) with shape (num_samples,).

Code:

from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

show_shapes(x_train, y_train, x_test, y_test)
print('\n******************************\n')
show_sample_image(x_train, y_train, cmap='gray')

Output:

Training shape:
  x_train.shape: (60000, 28, 28)
  y_train.shape: (60000,)

Testing shape:
  x_test.shape: (10000, 28, 28)
  y_test.shape: (10000,)

******************************

Label: 5
Shape: (28, 28)

AxesImage(54,36;334.8x217.44)

[MNIST sample image]
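
Most convolutional models expect an explicit channel dimension and floating-point inputs, so a typical preprocessing step looks something like this (a sketch, reusing the arrays loaded above):

# Add a channel dimension and scale pixel values from 0-255 to [0, 1]
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0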

5. Fashion-MNIST (classification of 10 fashion categories)

Dataset of 60,000 28x28 grayscale images of 10 fashion categories, along with a test set of 10,000 images. This dataset can be used as a drop-in replacement for MNIST. The class labels are:

Label   Description
0       T-shirt/top
1       Trouser
2       Pullover
3       Dress
4       Coat
5       Sandal
6       Shirt
7       Sneaker
8       Bag
9       Ankle boot

load_data() returns two types of data:

  1. x_train and x_test
    • uint8 array of grayscale image data with shape (num_samples, 28, 28).
  2. y_train and y_test
    • uint8 array of category labels (integers in range 0-9) with shape (num_samples,).

Code:

from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

show_shapes(x_train, y_train, x_test, y_test)
print('\n******************************\n')
show_sample_image(x_train, y_train, cmap='gray')

Output:

Training shape:
  x_train.shape: (60000, 28, 28)
  y_train.shape: (60000,)

Testing shape:
  x_test.shape: (10000, 28, 28)
  y_test.shape: (10000,)

******************************

Label: 9
Shape: (28, 28)

AxesImage(54,36;334.8x217.44)

[Fashion-MNIST sample image]
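
The labels are plain integers, so mapping them back to the class names from the table above takes a small lookup list (a convenience helper, not part of the dataset itself):

# Class names in label order, taken from the table above
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

print(class_names[y_train[0]])   # the sample above has label 9 -> 'Ankle boot'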

6. IMDB Movie Reviews (binary text classification)

Dataset of 25,000 movie reviews from IMDB, labeled by sentiment (positive/negative). Reviews have been preprocessed, and each review is encoded as a sequence of word indexes (integers). For convenience, words are indexed by overall frequency in the dataset, so that for instance the integer "3" encodes the 3rd most frequent word in the data. This allows for quick filtering operations such as: "only consider the top 10,000 most common words, but eliminate the top 20 most common words".

As a convention, "0" does not stand for a specific word, but instead is used to encode any unknown word.
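
The filtering described above maps directly onto arguments of load_data. For example, to keep only the 10,000 most frequent words while dropping the 20 most frequent ones:

from keras.datasets import imdb

# Words outside the kept vocabulary are replaced by the out-of-vocabulary index
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000, skip_top=20)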

load_data() returns two types of data:

  • x_train and x_test
    • list of sequences, which are lists of indexes (integers).
  • y_train and y_test
    • list of integer labels (1 or 0).

Code:

from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data()

show_shapes(x_train, y_train, x_test, y_test)
print('\n******************************\n')
show_sample(x_train, y_train, idx=123)

Output:

Training shape:
  x_train.shape: (25000,)
  y_train.shape: (25000,)

Testing shape:
  x_test.shape: (25000,)
  y_test.shape: (25000,)

******************************

x_train sample:
[1, 307, 5, 1301, 20, 1026, 2511, 87, 2775, 52, 116, 5, 31, 7, 4, 91, 1220, 102, 13, 28, 110, 11, 6, 137, 13, 115, 219, 141, 35, 221, 956, 54, 13, 16, 11, 2714, 61, 322, 423, 12, 38, 76, 59, 1803, 72, 8, 10508, 23, 5, 967, 12, 38, 85, 62, 358, 99]

y_train sample:
1

7. Reuters News (multiclass text classification)

Dataset of 11,228 newswires from Reuters, labeled over 46 topics. As with the IMDB dataset, each wire is encoded as a sequence of word indexes (same conventions).

load_data() returns two types of data:

  • x_train and x_test
    • list of sequences, which are lists of indexes (integers).
  • y_train and y_test
    • list of integer labels (0 to 45).

Code:

from keras.datasets import reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data()

show_shapes(x_train, y_train, x_test, y_test)
print('\n******************************\n')
show_sample(x_train, y_train, idx=1)

Output:

Training shape:
  x_train.shape: (8982,)
  y_train.shape: (8982,)

Testing shape:
  x_test.shape: (2246,)
  y_test.shape: (2246,)

******************************

x_train sample:
[1, 3267, 699, 3434, 2295, 56, 16784, 7511, 9, 56, 3906, 1073, 81, 5, 1198, 57, 366, 737, 132, 20, 4093, 7, 19261, 49, 2295, 13415, 1037, 3267, 699, 3434, 8, 7, 10, 241, 16, 855, 129, 231, 783, 5, 4, 587, 2295, 13415, 30625, 775, 7, 48, 34, 191, 44, 35, 1795, 505, 17, 12]

y_train sample:
4
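
Since the topics are just integers from 0 to 45, a softmax classifier typically needs them one-hot encoded; a quick sketch with keras.utils.to_categorical:

from keras.utils import to_categorical

# One-hot encode the 46 topic labels
y_train_onehot = to_categorical(y_train, num_classes=46)   # shape (8982, 46)
y_test_onehot = to_categorical(y_test, num_classes=46)     # shape (2246, 46)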

If you enjoyed this post and want to buy me a cup of coffee...

The thing is, I'll always accept a cup of coffee. So feel free to buy me one.

Cheers! ☕️