1. Create your cloud resources

BinderHub is built to run on top of Kubernetes, a distributed cluster manager. It uses a JupyterHub to launch and manage user servers, as well as a Docker registry to cache images.

To create your own BinderHub, you’ll first need to set up a Kubernetes cluster in the cloud, and then configure its various components correctly. The following instructions will assist you in doing so.

1.1. Setting up Kubernetes on Google Cloud


BinderHub is built to be cloud agnostic, and can run on various cloud providers (as well as bare metal). However, here we only provide instructions for Google Cloud, as it has been the most extensively tested. If you would like to help with adding instructions for other cloud providers, please contact us!

Google Kubernetes Engine (GKE) is the simplest and most common way of setting up a Kubernetes cluster. You may be able to receive free credits for trying it out. You will need to connect your credit card or other payment method to your Google Cloud account.

  1. Go to https://console.cloud.google.com and log in.

  2. Enable the Kubernetes Engine API.

  3. Use your preferred command line interface.

    You have two options: a) use the Google Cloud Shell (no installation needed) or b) install and use the gcloud command-line tool. If you are unsure which to choose, we recommend beginning with option “a” and using the Google Cloud Shell. Instructions for each are detailed below:

    a. Use the Google Cloud Shell. Start the Google Cloud Shell by clicking the Activate Cloud Shell button (the terminal icon) in the console toolbar. This will start an interactive shell session within Google Cloud.

       See the Google Cloud Shell docs for more information.

    b. Install and use the gcloud command-line tool. This tool sends commands to Google Cloud and lets you do things like create and delete clusters.
  4. Install kubectl, which is a tool for controlling Kubernetes. From the terminal, enter:

    gcloud components install kubectl
  5. Create a Kubernetes cluster on Google Cloud by typing the following command into either the Google Cloud Shell or the gcloud command-line tool:

    gcloud container clusters create <YOUR-CLUSTER> \
        --num-nodes=3 \
        --machine-type=n1-standard-2 \
        --zone=<YOUR-ZONE>

    • --num-nodes specifies how many computers to spin up. The higher the number, the greater the cost.
    • --machine-type specifies the amount of CPU and RAM in each node. There is a variety of types to choose from. Picking something appropriate here will have a large effect on how much you pay - smaller machines restrict the maximum amount of RAM each user can have access to but allow more fine-grained scaling, reducing cost. The default (n1-standard-2) has 2 CPUs and 7.5GB of RAM each, and might not be a good fit for all use cases!
    • --zone specifies which data center to use. Pick something that is not too far away from your users. A list of available zones can be found in the Google Cloud documentation.
  6. To test if your cluster is initialized, run:

    kubectl get node

    The response should list three running nodes.

  7. Give your account super-user permissions, allowing you to perform all the actions needed to set up JupyterHub. Replace <YOUR-EMAIL-ACCOUNT> with the email address of your Google Cloud account (see the sketch after this list for one way to look it up):

    kubectl create clusterrolebinding cluster-admin-binding \
        --clusterrole=cluster-admin \
        --user=<YOUR-EMAIL-ACCOUNT>

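If you are working with the gcloud command-line tool rather than the Cloud Shell, the following optional sketch shows how you might set project and zone defaults, look up the email address to use for --user above, and point kubectl at the new cluster. The <YOUR-...> values are placeholders for your own project, zone, and cluster name.

# Optional: set defaults so you don't have to pass --project/--zone every time.
gcloud config set project <YOUR-PROJECT-ID>
gcloud config set compute/zone <YOUR-ZONE>

# Print the email address of the account gcloud is authenticated as;
# use this value for --user in the clusterrolebinding command above.
gcloud config get-value account

# If kubectl does not yet know about the cluster, fetch its credentials.
gcloud container clusters get-credentials <YOUR-CLUSTER> --zone=<YOUR-ZONE>
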
1.2. Install Helm

Helm, the package manager for Kubernetes, is a useful tool to install, upgrade and manage applications on a Kubernetes cluster. We will be using Helm to install and manage JupyterHub on our cluster.

1.2.1. Installation

The simplest way to install Helm is to run Helm’s installer script in a terminal:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

Alternative methods for installing Helm exist if you prefer not to use the script.
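
For example, two common alternatives are sketched below; these assume you already have the relevant package manager installed, and package names can change between releases, so treat them as a starting point rather than the canonical method.

# On macOS, using Homebrew (the Helm 2 formula was named kubernetes-helm):
brew install kubernetes-helm

# On Linux, using snap:
sudo snap install helm --classic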

1.2.2. Initialization

After installing Helm on your machine, initialize it on your Kubernetes cluster by following these steps at the terminal:

  1. Set up a ServiceAccount for use by Tiller, the server side component of helm.

    kubectl --namespace kube-system create serviceaccount tiller

    Azure AKS: If you’re on Azure AKS, you should now skip directly to step 3.

  2. Give the ServiceAccount full permissions to manage the cluster.

    Most clusters have RBAC enabled, in which case you need this step, but you must skip it if your Kubernetes cluster does not have RBAC enabled (for example, if you are using Azure AKS).

    kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
  3. Set up Helm on the cluster.

    helm init --service-account tiller

This command only needs to run once per Kubernetes cluster.
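
As a quick optional sanity check that initialization created Tiller on your cluster, you can look for the tiller-deploy deployment that the Secure Helm step below also refers to; the exact READY/AVAILABLE numbers will depend on your cluster.

# Tiller runs as a deployment named tiller-deploy in the kube-system namespace.
kubectl --namespace kube-system get deployment tiller-deploy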

1.2.3. Verify

You can verify that you have the correct version and that it installed properly by running:

helm version

It should provide output like:

Client: &version.Version{SemVer:"v2.8.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}

Make sure you have at least version 2.8.1!

If you receive an error that the Server is unreachable, run helm version again after 15-30 seconds, and it should display the Server version.

1.2.4. Secure Helm

Ensure that tiller is secure from access inside the cluster:

kubectl --namespace=kube-system patch deployment tiller-deploy --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'
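
This patch makes Tiller listen only on localhost, so other pods inside the cluster cannot send it commands. If you want to confirm that the patch took effect, the optional command below prints the container’s command line, which should now include the --listen=localhost:44134 flag.

# Show the command the tiller container is started with after the patch.
kubectl --namespace=kube-system get deployment tiller-deploy \
    -o jsonpath='{.spec.template.spec.containers[0].command}'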

1.3. Set up the container registry

BinderHub will build Docker images out of GitHub repositories, and then push them to a Docker registry so that JupyterHub can launch user servers based on these images. You can use any registry that you like, though this guide covers how to properly configure the Google Container Registry (gcr.io).

You need to provide BinderHub with proper credentials so it can push images to the Google Container Registry. You can do so by creating a service account that has authorization to push to Google Container Registry:

  1. Go to console.cloud.google.com
  2. Make sure your project is selected
  3. Click <top-left menu w/ three horizontal bars> -> IAM & Admin -> Service Accounts menu option
  4. Click Create service account
  5. Give your account a descriptive name such as “binderhub-builder”
  6. Click Role -> Storage -> Storage Admin menu option
  7. Check Furnish new private key
  8. Leave key type as default of JSON
  9. Click Create

These steps will download a JSON file to your computer. The JSON file contains the password that can be used to push Docker images to the gcr.io registry.
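
If you prefer to work from the command line instead of clicking through the console, a roughly equivalent sequence of gcloud commands is sketched below; the service account name “binderhub-builder” and the key filename are only examples, and <YOUR-PROJECT-ID> is a placeholder for your project.

# Create the service account (the name is just an example).
gcloud iam service-accounts create binderhub-builder \
    --display-name="BinderHub image builder"

# Grant it the Storage Admin role so it can push images to gcr.io.
gcloud projects add-iam-policy-binding <YOUR-PROJECT-ID> \
    --member="serviceAccount:binderhub-builder@<YOUR-PROJECT-ID>.iam.gserviceaccount.com" \
    --role="roles/storage.admin"

# Download a JSON key for the account; this file is the secret described below.
gcloud iam service-accounts keys create binderhub-builder-key.json \
    --iam-account=binderhub-builder@<YOUR-PROJECT-ID>.iam.gserviceaccount.com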


Don’t share the contents of this JSON file with anyone. It can be used to gain access to your Google Cloud account!


Make sure to store this JSON file safely, as you cannot generate a second one without re-doing the steps above.
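
If you have Docker installed locally, you can optionally sanity-check the key by logging in to the registry with it. The username _json_key is what Google Container Registry expects when authenticating with a JSON key file; the filename below is just the example used earlier, so substitute the name of the file you downloaded.

# Authenticate against gcr.io using the downloaded service-account key.
docker login -u _json_key --password-stdin https://gcr.io < binderhub-builder-key.json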

Now that our cloud resources are set up, it’s time to Set up BinderHub.