High Performance and GPU Computing

Creating a GPU instance

You can spin up a GPU-enabled instance in the same way you spin up a Jupyter instance in Saturn.


Saturn supports instances with T4 GPUs (the g4dn family), as well as instances with V100 GPUs (including p3.16xlarge, which has 8 V100 GPUs). These instances must be paired with GPU-enabled Docker images.

GPU Docker images

In order to make effective use of a GPU instance, the Docker image must be GPU-enabled. Otherwise no CUDA drivers will be available, and you won't be able to use the GPU card. Saturn ships with a saturn-gpu image, which includes common GPU libraries, including RAPIDS, PyTorch, and TensorFlow/Keras.

If you want to build your own GPU image, you must choose saturnbase-gpu as your base image in order to pick up NVIDIA CUDA drivers. We are currently using CUDA 10.1. If you need another version of CUDA, please let us know.
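As a rough sketch, a custom GPU image might start from a Dockerfile like the one below. Only the saturnbase-gpu base image name comes from the docs; the exact image reference in your registry may differ, and the environment-file step is purely illustrative.

```dockerfile
# Start from Saturn's GPU base image to pick up the NVIDIA CUDA drivers
# (illustrative reference; confirm the full image path in your registry).
FROM saturnbase-gpu

# Illustrative: layer your own GPU-enabled packages on top via conda.
COPY environment.yml .
RUN conda env update -f environment.yml
```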

Python packages

The Python packages you use must be built for the GPU, and in some cases for the specific version of CUDA you are running (in our case, 10.1).


With pip, make sure you install tensorflow-gpu, not tensorflow. With conda, you should specify the precise build you want (pick a GPU build). For example:

$ conda search tensorflow
tensorflow                     2.2.0 eigen_py36h84d285f_0  pkgs/main
tensorflow                     2.2.0 eigen_py37h1b16bb3_0  pkgs/main
tensorflow                     2.2.0 eigen_py38hfc6e53c_0  pkgs/main
tensorflow                     2.2.0 gpu_py36hf933387_0  pkgs/main
tensorflow                     2.2.0 gpu_py37h1a511ff_0  pkgs/main
tensorflow                     2.2.0 gpu_py38hb782248_0  pkgs/main
tensorflow                     2.2.0 mkl_py36h5a57954_0  pkgs/main
tensorflow                     2.2.0 mkl_py37h6e9ce2d_0  pkgs/main
tensorflow                     2.2.0 mkl_py38h6d3daf0_0  pkgs/main

The MKL and Eigen builds won't work here; pick the GPU build that matches your Python version. In an environment.yml:

  - tensorflow=2.2.0=gpu_py37h1a511ff_0
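For reference, a complete environment.yml built around this pin might look like the following sketch; the environment name and the explicit Python pin are illustrative, not from the docs.

```yaml
# Illustrative environment.yml pinning the GPU build of TensorFlow.
name: tensorflow-gpu-env   # name is an example, not prescribed
channels:
  - defaults
dependencies:
  - python=3.7             # must match the py37 in the build string
  - tensorflow=2.2.0=gpu_py37h1a511ff_0
```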


PyTorch is best installed from the pytorch conda channel, and is built for specific versions of CUDA:

$ conda search pytorch
pytorch                        1.5.1 py3.7_cpu_0                      pytorch
pytorch                        1.5.1 py3.7_cuda10.1.243_cudnn7.6.3_0  pytorch
pytorch                        1.5.1 py3.7_cuda10.2.89_cudnn7.6.5_0   pytorch
pytorch                        1.5.1 py3.7_cuda9.2.148_cudnn7.6.3_0   pytorch
pytorch                        1.5.1 py3.8_cpu_0                      pytorch
pytorch                        1.5.1 py3.8_cuda10.1.243_cudnn7.6.3_0  pytorch
pytorch                        1.5.1 py3.8_cuda10.2.89_cudnn7.6.5_0   pytorch
pytorch                        1.5.1 py3.8_cuda9.2.148_cudnn7.6.3_0   pytorch

Pick the package that matches your Python version and CUDA 10.1. In an environment.yml:

channels:
  - pytorch
  - defaults
dependencies:
  - pytorch=1.5.1=py3.7_cuda10.1.243_cudnn7.6.3_0


RAPIDS works the same way as PyTorch: pick the conda build from the rapidsai channel that matches your CUDA and Python versions.

channels:
  - rapidsai
  - defaults
dependencies:
  - rapids=0.14.1=cuda10.1_py37_0
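Once an environment like the ones above is built, it can be worth confirming that the frameworks actually see the GPU. The snippet below is a sketch I'm adding (the `check_gpu` helper is a made-up name, not part of Saturn or these libraries); run it inside the GPU instance.

```python
# Sanity check: confirm PyTorch and TensorFlow can see the GPU.
# Each framework is probed independently so a missing package
# reports as None rather than crashing the whole check.
def check_gpu():
    results = {}
    try:
        import torch
        results["pytorch_cuda"] = torch.cuda.is_available()
        results["pytorch_cuda_version"] = torch.version.cuda  # expect "10.1"
    except ImportError:
        results["pytorch_cuda"] = None
    try:
        import tensorflow as tf
        results["tensorflow_gpus"] = len(tf.config.list_physical_devices("GPU"))
    except ImportError:
        results["tensorflow_gpus"] = None
    return results

if __name__ == "__main__":
    print(check_gpu())
```

If `pytorch_cuda` comes back False on a GPU instance, the usual culprit is a CPU-only build having been installed, which is exactly what the build-string pins above are meant to prevent.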