
 How to do it...

As we did earlier, we will set up a Ceph client machine using Vagrant and VirtualBox. We will use the same Vagrantfile that we cloned in the last chapter. Vagrant will then launch a CentOS 7.3 virtual machine that we will configure as a Ceph client:

  1. From the directory where we cloned the Ceph-Cookbook-Second-Edition GitHub repository, launch the client virtual machine using Vagrant:
        $ vagrant status client-node1
        $ vagrant up client-node1
  2. Log in to client-node1 and update the node:
        $ vagrant ssh client-node1
        $ sudo yum update -y

The username and password that Vagrant uses to configure virtual machines is vagrant, and the vagrant user has sudo rights. The default password for the root user is vagrant.

  3. Check the OS and kernel release (this is optional):
        # cat /etc/centos-release
        # uname -r
  4. Check for RBD support in the kernel:
        # sudo modprobe rbd
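
If the module loads, modprobe returns silently. As an optional sanity check (our addition, not part of the original recipe), you can confirm that the rbd module is loaded:

        # lsmod | grep rbd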
  5. Allow the ceph-node1 monitor machine to access client-node1 over SSH. To do this, copy the root SSH keys from ceph-node1 to the vagrant user on client-node1. Execute the following commands from the ceph-node1 machine unless otherwise specified:
        ## Log in to the ceph-node1 machine
        $ vagrant ssh ceph-node1
        $ sudo su -
        # ssh-copy-id vagrant@client-node1

Provide the one-time vagrant user password, that is, vagrant, for client-node1. Once the SSH keys are copied from ceph-node1 to client-node1, you should be able to log in to client-node1 without a password.
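
To verify the passwordless login (an optional check we are adding here), run the following from ceph-node1; it should print the client's hostname without prompting for a password:

        # ssh vagrant@client-node1 hostname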

  6. Using Ansible, we will create the ceph-client role, which copies the Ceph configuration file and administration keyring to the client node. On our Ansible administration node, ceph-node1, add a new section [clients] to the /etc/ansible/hosts file:
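
The section simply lists the client hosts. For our single client, it would look like the following (a minimal sketch using our node names):

        [clients]
        client-node1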
  7. Go to the /etc/ansible/group_vars directory on ceph-node1 and create a copy of clients.yml from clients.yml.sample:
        # cp clients.yml.sample clients.yml

You can instruct the ceph-client role to create pools and clients by updating the clients.yml file. By uncommenting user_config and setting it to true, you can define custom pools and client names, together with their cephx capabilities.
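
For illustration only, an uncommented user_config block might look like the following; the exact keys and defaults vary between ceph-ansible releases, so treat this as a sketch and check your clients.yml.sample for the authoritative schema:

        user_config: true
        pools:
          - { name: rbd, pgs: "128" }
        keys:
          - { name: client.rbd, mon_cap: "allow r", osd_cap: "allow rwx pool=rbd", mode: "0600" }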

  8. Run the Ansible playbook from ceph-node1:
        root@ceph-node1 ceph-ansible # ansible-playbook site.yml
  9. On client-node1, check and validate that the keyring and ceph.conf file were populated into the /etc/ceph directory by Ansible:
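
A quick way to check (our own addition) is simply to list the directory:

        # ls -l /etc/ceph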
  10. On client-node1, you can validate that the Ceph client packages were installed by Ansible:
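
For example, you can query the RPM database for Ceph packages (one possible check; the exact package list depends on your Ceph release):

        # rpm -qa | grep -i ceph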
  11. The client machine will require Ceph keys to access the Ceph cluster. Ceph creates a default user, client.admin, which has full access to the Ceph cluster, and Ansible copies the client.admin key to client nodes. It's not recommended to share client.admin keys with client nodes. A better approach is to create a new Ceph user with separate keys and allow access to specific Ceph pools.
    In our case, we will create a Ceph user, client.rbd, with access to the RBD pool. By default, Ceph Block Devices are created on the RBD pool:
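
Running a command along the following lines on a monitor node creates such a user; the capability string shown is a common one for RBD clients and can be adapted to your needs:

        # ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'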
  12. Add the key to the client-node1 machine for the client.rbd user:
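
One way to do this (assuming passwordless SSH from ceph-node1, as set up earlier) is to pipe the keyring over SSH:

        # ceph auth get-or-create client.rbd | ssh vagrant@client-node1 sudo tee /etc/ceph/ceph.client.rbd.keyring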
  13. At this point, client-node1 should be ready to act as a Ceph client. Check the cluster status from the client-node1 machine by providing the username and secret key:

# cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring
### Since we are not using the default user client.admin,
### we need to supply the username that will connect to the Ceph cluster

# ceph -s --name client.rbd
