OpenStack-Ansible Ceph Configuration

OpenStack-Ansible doesn’t deploy Ceph for you; if you want to do that, there are excellent Ceph roles from Sebastien Han. However, openstack-ansible does support configuring OpenStack services as Ceph clients: specifically Glance images, Cinder volumes, and Nova ephemeral disks. This article is mostly a reminder for myself of the variables needed to configure openstack-ansible to use Ceph.

Requirements:

  1. An existing Ceph cluster that is accessible from your deployment host (Ansible will log in via SSH key to retrieve ceph.conf and the keyrings)
  2. Ceph users: you can create one for each OpenStack service or share a single one across all of them. Instructions for adding ceph users.
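
For the shared-user approach used in the examples below, a client can be created on a Ceph monitor node along these lines (the client name “cinder” and pool name “rbd” match the variables used later; adjust the capabilities to your own pools):

```shell
# Run on a ceph mon node: create a "cinder" client with read/write
# access to the "rbd" pool, and write its keyring out to a file.
ceph auth get-or-create client.cinder \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' \
  -o /etc/ceph/ceph.client.cinder.keyring
```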

Configuration

  1. Configure user_variables: /etc/openstack_deploy/user_variables.yml
    # General Ceph Config
    ceph_apt_repo_url_region: "www" # or "eu" for Netherlands based mirror
    ceph_stable_release: hammer
    cephx: true
    ceph_mons:
      - 1.2.3.4 # add your ceph mon IPs here; my single-VM test ceph cluster only has one mon
    
    # Glance
    glance_default_store: rbd
    glance_ceph_client: cinder # I'm using the same ceph user for all services
    glance_rbd_store_pool: rbd
    glance_rbd_store_chunk_size: 8
    
    # Nova
    nova_libvirt_images_rbd_pool: rbd # I'm also using the same pool, not a good idea in production. 
    nova_ceph_client: cinder
    
    # Cinder, note additional config is required in user_config
    cinder_ceph_client: cinder
    
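For reference, the Glance variables above end up rendering a glance-api.conf RBD section roughly like the following (a sketch only; exact option names can vary between releases):

```ini
[glance_store]
default_store = rbd
rbd_store_pool = rbd
rbd_store_user = cinder
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
```
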
  2. Configure openstack_user_config: /etc/openstack_deploy/openstack_user_config.yml. The default config has a single LVM backend; for Ceph, remove that and replace it with an RBD backend:
    storage_hosts:
      aio1:
        ip: 4.3.2.1
        container_vars:
          cinder_backends:
            limit_container_types: cinder_volume
            rbd:
              volume_driver: cinder.volume.drivers.rbd.RBDDriver
              volume_backend_name: rbd
              rbd_pool: rbd
              rbd_ceph_conf: /etc/ceph/ceph.conf
              rbd_user: cinder
  3. Run playbooks (via run-playbooks.sh or openstack-ansible setup-everything.yml)
  4. Check the OpenStack service logs, and verify that the relevant RBD images are being created via “rbd ls” on a ceph node.
  5. Done.
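
Step 4 can be sketched like this (the image name and file are placeholders; run the rbd command on a ceph node):

```shell
# Upload a test image through Glance...
openstack image create --disk-format raw --container-format bare \
  --file cirros.raw cirros-ceph-test

# ...then, on a ceph node, the new image's UUID should appear in the pool:
rbd ls rbd
```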