To install and configure the Ceph RGW, we will use ceph-ansible from ceph-node1, which is our ceph-ansible node and also one of the monitor nodes. Log in to ceph-node1 and perform the following commands:
Make sure that ceph-node1 can reach rgw-node1 over the network by using the following command:
# ping rgw-node1 -c 1
Allow ceph-node1 a password-less SSH login to rgw-node1 and test the connection. The root password for rgw-node1 is the same as earlier, that is, vagrant:
# ssh-copy-id rgw-node1
# ssh rgw-node1 hostname
Add rgw-node1 to the ceph-ansible hosts file and test the Ansible ping command:
# ansible all -m ping
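As a sketch, the inventory addition might look like the following; ceph-ansible expects RGW hosts under an [rgws] group, and the hosts file path shown is the Ansible default assumed here:

```ini
# /etc/ansible/hosts -- ceph-ansible inventory (excerpt)
# ...existing [mons]/[osds] groups stay as they are...

[rgws]
rgw-node1
```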
Update all.yml file to install and configure the Ceph RGW in the VM rgw-node1:
[root@ceph-node1 ceph-ansible]# cd /usr/share/ceph-ansible/group_vars/
[root@ceph-node1 group_vars]# vim all.yml
Enable the radosgw_civetweb_port and radosgw_civetweb_bind_ip options. In this book, rgw-node1 has the IP 192.168.1.106 and we are using port 8080:
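A minimal sketch of the relevant all.yml lines, assuming the IP and port mentioned above (uncomment the two options and set their values):

```yaml
# /usr/share/ceph-ansible/group_vars/all.yml (excerpt)
radosgw_civetweb_port: 8080
radosgw_civetweb_bind_ip: 192.168.1.106
```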
Change the directory back to /usr/share/ceph-ansible and run the playbook; it will install and configure the RGW on rgw-node1:
$ cd ..
$ ansible-playbook site.yml
Once ceph-ansible finishes the installation and configuration, you will have the following recap output:
Once it completes, you will have the radosgw daemon running on rgw-node1:
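One way to confirm the daemon is up is via systemd; the unit name below assumes the conventional ceph-radosgw@rgw.&lt;hostname&gt; form, so adjust it if your deployment names it differently:

```shell
# Check the radosgw daemon on rgw-node1 (run from ceph-node1).
# Unit name assumes the conventional ceph-radosgw@rgw.<hostname> form.
ssh rgw-node1 systemctl status ceph-radosgw@rgw.rgw-node1
```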
You will also notice in the following screenshot that we now have additional pools, which were created for the RGW:
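You can list the pools from any monitor node; on a fresh RGW deployment they typically include entries such as .rgw.root and several default.rgw.* pools, though the exact names vary by Ceph release:

```shell
# List all pools; RGW-related pools (for example .rgw.root,
# default.rgw.control, default.rgw.meta, default.rgw.log)
# appear after the RGW deployment.
ceph osd lspools
```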
The Civetweb web server that is embedded into the radosgw daemon should now be running on the specified port, 8080:
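A quick probe can confirm that Civetweb is answering; the IP and port below are the ones configured earlier in this setup, and an anonymous request normally returns a small XML document:

```shell
# Probe the embedded Civetweb server (IP/port as configured above).
# An anonymous GET on / normally returns an XML ListAllMyBuckets-style
# response for the anonymous user.
curl -s http://192.168.1.106:8080/
```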
You will have the following entries related to this RGW in the /etc/ceph/ceph.conf file on the rgw-node1 VM:
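The generated entries typically look like the following sketch; the section name and frontend line follow the conventions used above, and your generated file may differ slightly:

```ini
# /etc/ceph/ceph.conf on rgw-node1 (excerpt, illustrative)
[client.rgw.rgw-node1]
host = rgw-node1
rgw frontends = civetweb port=192.168.1.106:8080
```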