May 5, 2015

Using ceph-ansible to deploy and test a multi-node ceph cluster


Our team is presently working on building a Ceph Block Volume Plugin for Kubernetes. As such, I wanted a quick and easy way for everyone to deploy a local Ceph Cluster in Virtual Machines, so we can test the plugin's ability to provision a Ceph Block Device and mount it to a given Docker Host (or Kubernetes Node) from a development environment or another virtual machine.

After spending a few days trying to find the most convenient solution, I settled on ceph-ansible, as it uses a combination of vagrant (to provision the VMs) and ansible (to configure them), and the entire cluster can be launched with literally one command (vagrant up). Here's how it works:



On the Developer's Machine:


1) Install Vagrant and your Vagrant-compatible hypervisor of choice (I use VirtualBox, as I find it has the broadest vagrant box support, and I can't use KVM because I am on a Mac)

2) Install Ansible

3) Clone the ceph-ansible repository
  # git clone https://github.com/ceph/ceph-ansible.git
  # cd ceph-ansible
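
(Optional) If you want to change how many VMs vagrant creates or how much memory each one gets, the knobs live either directly in the Vagrantfile or in a vagrant_variables.yml file copied from the shipped sample, depending on which revision of the repo you cloned. The file and key names below are from the copy I used, so treat this as a sketch and check your own checkout:
  # cp vagrant_variables.yml.sample vagrant_variables.yml
  mon_vms: 3      # number of monitor VMs
  osd_vms: 3      # number of OSD VMs
  memory: 1024    # RAM per VM, in MB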

4) Edit ceph-ansible/roles/ceph-common/defaults/main.yml and set the following values to "false" in the CEPH CONFIGURATION section.
  cephx_require_signatures: false
  cephx_cluster_require_signatures: false
  cephx_service_require_signatures: false

5) Deploy the Ceph Cluster
  # vagrant up
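
This takes a while the first time, since vagrant has to download the box image and ansible has to configure every node. You can list the VMs that were created with vagrant status; the names below assume the default layout of three monitors and three OSD nodes, so yours will match whatever your Vagrantfile defines:
  # vagrant status
  mon0    running (virtualbox)
  mon1    running (virtualbox)
  mon2    running (virtualbox)
  osd0    running (virtualbox)
  osd1    running (virtualbox)
  osd2    running (virtualbox)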

6) Check the Status of the Ceph Cluster you just deployed
  # vagrant ssh mon0 -c "sudo ceph -s"
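
A healthy cluster reports HEALTH_OK. The fsid, addresses, and epoch numbers below are placeholders; only the overall shape of the output matters:
  cluster a7f64266-...
  health HEALTH_OK
  monmap e1: 3 mons at {...}
  osdmap e14: 6 osds: 6 up, 6 in
  pgmap v41: 64 pgs, 1 pools, 0 bytes data, 0 objects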

7) Copy the ceph configuration file and ceph keyring to each server you plan to mount ceph block devices onto (such as the Fedora 21 server shown in the diagram above).
  # vagrant ssh mon0
  # cd /etc/ceph/
  # sudo scp ceph.client.admin.keyring ceph.conf root@{IP of Fedora VM}:/etc/ceph/
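
Back on the Fedora VM, confirm that both files arrived (the sizes and timestamps below are illustrative):
  # ls -l /etc/ceph/
  -rw-------. 1 root root  63 May  5 15:20 ceph.client.admin.keyring
  -rw-r--r--. 1 root root 429 May  5 15:20 ceph.conf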

Configuring the Ceph Client (Fedora 21 VM)


This section assumes that you have already provisioned another server on which to create, format, and mount a ceph block device. In the diagram above, this is the Fedora 21 VM.

1) Install the ceph client libraries
  # yum -y install ceph-common

2) Create this directory, or you will see errors when using rbd commands
  # mkdir /var/run/ceph/

3) Disable and Stop firewalld
  # systemctl disable firewalld; systemctl stop firewalld
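
Disabling the firewall is fine for a throwaway test setup. If you'd rather keep firewalld running, opening the standard Ceph ports should also work (the monitors listen on 6789/tcp and the OSDs use the 6800-7300/tcp range):
  # firewall-cmd --permanent --add-port=6789/tcp
  # firewall-cmd --permanent --add-port=6800-7300/tcp
  # firewall-cmd --reload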

4) Create a 4 GB block device called "mydisk" (--size is specified in megabytes)
  # rbd create mydisk --size 4096
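
You can verify the image landed in the default rbd pool before mapping it (rbd info output trimmed):
  # rbd ls
  mydisk
  # rbd info mydisk
  rbd image 'mydisk':
          size 4096 MB in 1024 objects
          ...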

5) Map the block device from the server into your local block device list
  # rbd map mydisk --pool rbd --name client.admin
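
rbd showmapped confirms the mapping and shows which device node was assigned:
  # rbd showmapped
  id pool image  snap device
  0  rbd  mydisk -    /dev/rbd0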

6) Verify that a new block device (rbd0) has been added 
  # ls -l /dev/rbd?
  brw-rw----. 1 root disk 252,  0 May  5 15:34 /dev/rbd0

7) Format the Block Device
  # mkfs.ext4 -m0 /dev/rbd0

8) Mount the Block Device for use
  # mkdir /mnt/mydisk
  # mount /dev/rbd0 /mnt/mydisk/
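
A quick df confirms the device is mounted and usable (ext4 formatted with -m0 on a 4096 MB image comes out to roughly 3.9G; your numbers may differ slightly):
  # df -h /mnt/mydisk
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/rbd0       3.9G  8.0M  3.9G   1% /mnt/mydisk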
