Using PyTorch with GPU in Google Colab

First, WTF is Google Colab?

Google Colab is Google's collaborative version of the Jupyter/IPython notebook editing environment. They released the tool to the general public with the noble goal of disseminating machine learning education and research.

You should be excited, because even Chris Olah is excited:

Latest Feature: GPU

Its newest feature is the ability to use a GPU as a backend for free for 12 hours at a time. The details are as follows:

  1. The GPU used in the backend is a K80 (at this moment).
  2. The 12-hour limit applies to a continuous assignment of a virtual machine (VM).

This means we can keep using a GPU after the 12 hours are up by connecting to a different VM.

Activating GPU

  1. You need to sign up and apply for access before you can start using Google Colab.
  2. Once you have access, you can either upload your own notebook using File → Upload Notebook or simply enter your code in the cells.
  3. To enable GPU backend for your notebook, go to Edit → Notebook Settings and set Hardware accelerator to GPU.


Installing PyTorch

Enter these lines of codes into the cells:

!pip3 install torch
!pip3 install torchvision

And that's it!
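To confirm that the install worked and that the notebook actually sees the GPU, you can run a quick check in the next cell. This is a minimal sketch, not from the original post; it assumes nothing beyond a standard PyTorch install:

```python
import torch

# Print the installed PyTorch version.
print(torch.__version__)

# True when the GPU backend is attached and CUDA is working.
print(torch.cuda.is_available())

# Pick the right device either way; code written this way also
# runs fine on a CPU-only VM.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```

If the Hardware accelerator setting from the previous section is set to GPU, `torch.cuda.is_available()` should print `True`.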

Bonus: PyTorch Feedforward NN with GPU on Colab

Take a look at my Colab Notebook that uses PyTorch to train a feedforward neural network on the MNIST dataset with an accuracy of 98%.

The focus here isn't on the DL/ML part, but on the:

  1. Use of Google Colab.
  2. Use of Google Colab's GPU.
  3. Use of PyTorch in Google Colab with GPU.
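The key Colab-specific idea is that the same training code runs on GPU or CPU, as long as the model and the batches are moved to the chosen device. Here is a minimal sketch of that pattern, assuming a small feedforward net for 28×28 MNIST images; random tensors stand in for a real batch so the cell is self-contained (in the actual notebook you would pull batches from a torchvision `DataLoader` instead):

```python
import torch
import torch.nn as nn

# Run on the GPU when Colab has one attached, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small feedforward network: 784 inputs (flattened 28x28 image),
# one hidden layer, 10 output classes.
model = nn.Sequential(
    nn.Linear(28 * 28, 100),
    nn.ReLU(),
    nn.Linear(100, 10),
).to(device)  # move the parameters to the chosen device

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Random tensors standing in for one MNIST batch of 64 images.
images = torch.randn(64, 28 * 28).to(device)
labels = torch.randint(0, 10, (64,)).to(device)

# One training step: forward pass, loss, backward pass, update.
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```

The only GPU-specific lines are the `device` selection and the `.to(device)` calls; everything else is ordinary PyTorch.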


If you enjoyed this post and want to buy me a cup of coffee...

The thing is, I'll always accept a cup of coffee. So feel free to buy me one.

Cheers! ☕️