1. Create your cloud resources

BinderHub is built to run on top of Kubernetes, a distributed cluster manager. It uses JupyterHub to launch and manage user servers, as well as a Docker registry to cache built images.

To create your own BinderHub, you’ll first need to set up a properly configured Kubernetes cluster on the cloud and then configure its various components correctly. The following instructions will assist you in doing so.

Note

BinderHub uses a JupyterHub running on Kubernetes for much of its functionality. For information on setting up and customizing your JupyterHub, we recommend reading the Zero to JupyterHub Guide.

1.1. Setting up Kubernetes on Google Cloud

Note

BinderHub is built to be cloud agnostic, and can run on various cloud providers (as well as bare metal). However, here we only provide instructions for Google Cloud as it has been the most extensively tested. If you would like to help with adding instructions for other cloud providers, please contact us!

First, install Kubernetes by following the instructions in the Zero to JupyterHub guide. When you’re done, move on to the next section.
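
For orientation, creating such a cluster on Google Cloud comes down to a single gcloud command. This is only a sketch: the cluster name, zone, node count, and machine type below are illustrative placeholders, and the Zero to JupyterHub guide remains the authoritative reference.

# Example only: the name, zone, and sizes are placeholders, not recommendations
gcloud container clusters create binderhub-example \
    --zone us-central1-a \
    --num-nodes 2 \
    --machine-type n1-standard-2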

1.2. Install Helm

Helm, the package manager for Kubernetes, is a useful tool for installing, upgrading, and managing applications on a Kubernetes cluster. Helm packages are called charts. We will be installing and managing JupyterHub on our Kubernetes cluster using a Helm chart.
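
As a taste of what working with charts looks like once Helm is installed (installation is covered in the next section; the URL below is the public JupyterHub chart repository):

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update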

Charts are abstractions describing how to install packages onto a Kubernetes cluster. When a chart is deployed, it works as a templating engine that populates multiple YAML files for the package’s resources with the required variables, and then applies the resulting configuration to the cluster (much like running kubectl apply) to install the package.
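
Once Helm is installed and initialized (next sections), you can watch this templating step happen locally: helm template renders a chart’s YAML without touching the cluster. A minimal sketch, using the public stable/nginx-ingress chart purely as an example:

# Download a chart and render its templates locally (no cluster changes)
helm fetch stable/nginx-ingress --untar
helm template ./nginx-ingress --name example-release | head -n 20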

Helm has two parts: a client (helm) and a server (tiller). Tiller runs inside of your Kubernetes cluster as a pod in the kube-system namespace. Tiller manages both the releases (installations) and revisions (versions) of charts deployed on the cluster. When you run helm commands, your local Helm client sends instructions to tiller in the cluster, which in turn makes the requested changes.

Note

These instructions are for Helm 2. Helm 3 includes several major breaking changes and is not yet officially supported, but preliminary instructions are available for testing.

1.2.1. Installation

While several methods to install Helm exist, the simplest is to run Helm’s installer script in a terminal:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
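
Once the script finishes, you can confirm that the helm client is installed and on your PATH (tiller, the server side, is set up in the next step):

helm version --client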

1.2.2. Initialization

After installing helm on your machine, initialize Helm on your Kubernetes cluster:

  1. Set up a ServiceAccount for use by tiller.

    kubectl --namespace kube-system create serviceaccount tiller
    
  2. Give the ServiceAccount full permissions to manage the cluster.

    Note

    If you know your Kubernetes cluster does not have RBAC enabled, you must skip this step. Most users can ignore this note.

    kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
    

    See our RBAC documentation for more information.

  3. Initialize helm and tiller.

    helm init --service-account tiller --wait
    

    This command needs to run only once per Kubernetes cluster. It creates a tiller deployment in the kube-system namespace and sets up your local helm client. In other words, it installs and configures tiller (the server-side part of Helm) on the remote Kubernetes cluster. Later, when you want to deploy changes with helm (the local CLI), it will talk to tiller and tell it what to do; tiller then executes these instructions from within the cluster. Optional commands to double-check each of these steps are sketched after this list.

    Note

    If you wish to install helm on another computer, you won’t need to set up tiller again, but you still need to initialize helm:

    helm init --client-only
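
If you want to double-check the initialization steps above, each one leaves an object you can inspect. These checks are optional; the first line is a common way to tell whether RBAC is enabled at all (relevant to step 2):

kubectl api-versions | grep rbac.authorization.k8s.io
kubectl get serviceaccount tiller --namespace kube-system
kubectl get clusterrolebinding tiller
kubectl get deployment tiller-deploy --namespace kube-system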
    

1.2.3. Secure Helm

Ensure that tiller is secure from access inside the cluster:

kubectl patch deployment tiller-deploy --namespace=kube-system --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'

tiller’s port is exposed in the cluster without authentication, and if you probe this port directly (i.e., bypassing helm), tiller’s permissions can be exploited. This step forces tiller to listen to commands from localhost (i.e., helm) only, so that, for example, other pods inside the cluster cannot ask tiller to install a new chart that grants them arbitrary, elevated RBAC privileges. More details here.
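
To confirm the patch took effect, you can read back the command that tiller’s container now runs; the output should include --listen=localhost:44134:

kubectl get deployment tiller-deploy --namespace=kube-system -o jsonpath='{.spec.template.spec.containers[0].command}'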

1.2.4. Verify

You can verify that you have the correct version and that it installed properly by running:

helm version

Within a minute or so, once tiller on the cluster is ready, it should provide output like the one below. Make sure you have at least version 2.11.0 and that the client (helm) and server (tiller) versions match!

Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
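
If the client and server versions ever drift apart (for example, after upgrading the helm binary on your machine), you can usually bring tiller back in line with your client:

# Upgrades tiller in the cluster to match the local helm client version
helm init --upgrade --service-account tiller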

Now that you’ve installed Kubernetes and Helm, it’s time to Set up the container registry.