Feb 9, 2015

Building Distributed Containerized Applications using Kubernetes and GlusterFS

This post is a follow-on from Building a Simple LAMP Application using Docker

The Kubernetes project originated at Google and is an Apache-licensed platform for managing clustered containerized applications. While the project page provides a lot more detail, in the interest of time, I thought I'd provide a quick summary. Kubernetes has a distributed Master/Worker architecture and calls its workers "Minions". Containerized applications are described and deployed as Kubernetes Pods. A Pod contains one or more containers that are intended to reside together on the same Minion. Kubernetes makes it easy to ensure that a certain number of Pod Replicas exist via a runtime component called the ReplicationController. Lastly, Kubernetes ships with its own load balancer, called a "Service", which round-robins requests between the Pod Replicas running in the cluster.
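
For a sense of what these definitions look like on disk, here is a minimal sketch of a Pod file in YAML. The image name is a placeholder for your own PHP image, and the exact field layout depends on the API version your cluster is running, so treat this as illustrative rather than copy-and-paste ready:

id: php
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: php
    containers:
      - name: php
        image: fedora/apache-php     # placeholder -- substitute your own PHP image
        ports:
          - containerPort: 80        # port the PHP container listens on
labels:
  name: php                          # label that a Service selector can match on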

Let's walk through an actual use case (explicit instructions follow later). If I had a PHP Docker Image and I wanted to deploy it in my Kubernetes Cluster, I would write and submit a JSON or YAML Pod file that describes the intended deployment configuration of my PHP Container. I would then write and submit a JSON or YAML ReplicationController file that specifies that I want exactly 2 PHP Pods running at any one time, and I would finish by writing and submitting a JSON or YAML Service file that specifies how I want my PHP Pod Replicas load balanced. This use case is demonstrated in the diagram below. Note that no PHP Pod is running on Minion 3 because I specified in the ReplicationController that I only want 2 PHP Pod Replicas running.


As you may have noticed, this really is a simple architecture. Now that we've covered how to deploy containerized runtimes, let's take a look at what options are available within Kubernetes to give Pods access to data. At present, the following options are provided:

- The ephemeral storage capacity that is available within the container when it is launched

- EmptyDir, which is temporary scratch space for a container that is provided by the Host

- HostDir, which is a Host directory that you can mount onto a directory in the container.

- GCEPersistentDisk, which is a block device made available by Google Compute Engine's persistent disk (block storage) service.

Given that GCE block devices are really only an option if you are running in GCE, that leaves HostDir as the only means of obtaining durable persistence for Kubernetes on premises.

To explore how the HostDir option might be used, let's assume that you want to build the same load balanced, clustered PHP use case. One approach would be to copy the web content you want each PHP container to serve to the same local directory (/data) on every single Kubernetes Minion. One would then specify that directory as the HostDir parameter that is mounted onto /var/www/html in the container. This works, but it swiftly becomes operationally onerous: every time you update the web content, you have to copy it out to every single Minion in the cluster. In this scenario, it would be much easier if you could store the web content in one central place in the cluster and then provide a mount of that central place as the HostDir parameter.
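
Expressed in a pod manifest, that first (copy-to-every-Minion) approach looks roughly like the following fragment. As before, the image name is a placeholder and the v1beta field layout may differ slightly on your cluster:

containers:
  - name: php
    image: fedora/apache-php         # placeholder -- substitute your own PHP image
    volumeMounts:
      - name: html
        mountPath: /var/www/html     # where the web server serves content from
volumes:
  - name: html
    source:
      hostDir:
        path: /data                  # must already exist on every Minion the Pod can land on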

One way to provide that central place is to store the web content in a distributed file system and then mount that file system onto each Minion. To demonstrate this, we are going to use GlusterFS, a POSIX-compliant Distributed File System, which means it looks just like your local ext4 or XFS file system to the applications using it. The diagram below shows how each Kubernetes Minion (and therefore Docker Host) has its own FUSE mount of the GlusterFS Distributed File System.


Great, so how do I do this?

1) Firstly, you're going to need a GlusterFS volume. If you don't have one, you can build one reasonably quickly using this Vagrant recipe from Jay Vyas.

2) You're going to need a working Kubernetes cluster. If you don't have one, you can follow this tutorial on how to set one up on Fedora.

3) On each Minion within the cluster, create a FUSE mount of the GlusterFS volume by running the following commands (This assumes gluster-1.rhs is the FQDN of a server in your Gluster Storage Pool provisioned by Vagrant and that your volume is called MyVolume):

# mkdir -p /mnt/glusterfs/
# yum install -y glusterfs-fuse
# mount -t glusterfs gluster-1.rhs:/MyVolume /mnt/glusterfs  
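
If you want the mount to come back automatically after a Minion reboots, you can also add an entry to /etc/fstab on each Minion (same assumptions about the server FQDN and volume name as above):

gluster-1.rhs:/MyVolume  /mnt/glusterfs  glusterfs  defaults,_netdev  0 0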

4) On a Kubernetes Minion, copy your web content to /mnt/glusterfs/php/ in the Distributed File System. If you just want a simple test you can create a helloworld.html in /mnt/glusterfs/php. Then shell into a different Kubernetes Minion and validate that you can see the file(s) you just created in /mnt/glusterfs/php. If you can see these files, it means that your Gluster volume is properly mounted.
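
For example, on one Minion:

# mkdir -p /mnt/glusterfs/php
# echo "Hello from GlusterFS" > /mnt/glusterfs/php/helloworld.html

Then, on a different Minion, the file should be visible:

# ls /mnt/glusterfs/php/
helloworld.html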

5) Build and Submit a ReplicationController that produces a 2 Pod PHP farm serving content from the Distributed File System, by running the following commands on your Kubernetes Master:

# wget https://raw.githubusercontent.com/wattsteve/kubernetes/master/example-apps/php_controller_with_fuse_volume.json
# kubectl create -f php_controller_with_fuse_volume.json
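
A controller that does this boils down to something like the following. This is a sketch of the idea in YAML, not the literal contents of the file above; the image name is a placeholder and the v1beta field names may vary between releases:

id: php-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2                        # keep exactly 2 PHP Pod Replicas running
  replicaSelector:
    name: php_controller
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: php-controller
        containers:
          - name: php
            image: fedora/apache-php         # placeholder -- substitute your own PHP image
            ports:
              - containerPort: 80
            volumeMounts:
              - name: html
                mountPath: /var/www/html     # where the web server serves content from
        volumes:
          - name: html
            source:
              hostDir:
                path: /mnt/glusterfs/php     # the GlusterFS FUSE mount on each Minion
    labels:
      name: php_controller
labels:
  name: php_controller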

6) Build and Submit the Load Balancer Service for the PHP Farm:

# wget https://raw.githubusercontent.com/wattsteve/kubernetes/master/example-apps/php_service.json
# kubectl create -f php_service.json
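
Again as a sketch, the Service definition amounts to roughly this (illustrative, not the literal contents of php_service.json):

id: php-master
kind: Service
apiVersion: v1beta1
port: 80                             # port the load balanced Service listens on
selector:
  name: php_controller               # matches the label on the PHP Pod Replicas
labels:
  name: php-master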

7) Query the available services to obtain the IP that the Load Balancer Service is running on and submit a web request to test your setup.

# kubectl get services
NAME                LABELS                                    SELECTOR              IP                  PORT
kubernetes          component=apiserver,provider=kubernetes   <none>                10.254.83.120       443
kubernetes-ro       component=apiserver,provider=kubernetes   <none>                10.254.165.218      80
php-master          name=php-master                           name=php_controller   10.254.66.0         80

# curl http://10.254.66.0/helloworld.html

Thanks to Bradley Childs, Huamin Chen, Mark Turansky, Jay Vyas and Tim St. Clair for their help in putting this solution together.
