- Learning Ceph (Second Edition)
- Anthony D'Atri, Vaibhav Bhembre, Karan Singh
RADOS Block Device (RBD)
The RBD service is perhaps the most familiar, and at many sites it is the primary or even the only application of Ceph. It presents block (also known as volume) storage in a fashion that applications accustomed to traditional HDD/SSD devices can consume with little or no adjustment. In this way it is somewhat analogous to facets of Veritas Volume Manager (VxVM), Solaris Volume Manager (SVM), the Linux MD/LVM system, an iSCSI or Fibre Channel appliance, or even a ZFS ZVOL. RBD volumes, however, are natively available to multiple servers across the network.
One can build a filesystem directly upon an RBD volume, often as the boot device of a virtual machine, in which case the hypervisor is the client of the RBD service and presents the volume to the guest operating system via the virtio or emulation driver. Other uses include direct raw use by databases and direct attachment to a physical or virtual machine via a kernel driver. Some users find value in building logical volumes within their operating system instance on top of multiple RBD volumes in order to achieve performance or expansion goals. Block storage is appropriate when a disk-like resource is desired, and it provides consistent performance and latency. Capacity, however, is provisioned in discrete, disjoint chunks, so scaling up or down can be awkward and complex. Tools such as ZFS or a volume manager such as Linux LVM can mitigate this somewhat, but applications with highly variable volumes of data—think fluctuation over orders of magnitude—may be better suited to an object storage model.
RBD volume operations include the usual data reads and writes as well as creation and deletion. Snapshots can be managed for archival, checkpointing, and deriving related volumes. OpenStack's Nova, Cinder, and Glance services (Chapter 11, Performance and Stability Tuning) utilize RBD snapshots for instances, abstracted volumes, and guest OS images respectively. There is a facility to replicate/mirror RBD volumes between clusters or even sites for high availability and disaster recovery.
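The snapshot and clone workflow described above can be sketched with the `rbd` command-line tool; the pool, image, and snapshot names here (`rbd/vol1`, `snap1`) are illustrative, and the commands assume a running cluster with appropriate credentials:

```shell
# Create a point-in-time snapshot of an existing image
rbd snap create rbd/vol1@snap1

# Protect the snapshot so it can serve as the parent of clones
rbd snap protect rbd/vol1@snap1

# Derive a copy-on-write clone, e.g. for a new guest OS image
rbd clone rbd/vol1@snap1 rbd/vol1-clone

# List the snapshots of the image
rbd snap ls rbd/vol1
```

A protected snapshot cannot be deleted until its clones are flattened or removed, which is what makes it a safe parent for derived volumes.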
RBD volumes are often used transparently by virtual machines and abstractions including OpenStack Cinder and Glance, but applications and users can exploit them directly as well, via the rbd command-line tool and programmatically via librbd.
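A minimal sketch of direct use via the rbd command line, assuming a running cluster and the default `rbd` pool (the image name `vol0` is an example):

```shell
# Create a 10 GiB image (--size is given in MiB by default)
rbd create vol0 --size 10240

# Inspect image metadata: size, order, features
rbd info rbd/vol0

# Attach it through the kernel driver; prints the device node, e.g. /dev/rbd0
sudo rbd map rbd/vol0

# Detach and delete when finished
sudo rbd unmap /dev/rbd0
rbd rm rbd/vol0
```

Once mapped, the device can be partitioned, formatted, and mounted like any local disk.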
The following is an example use case:
The author of this chapter needed to deploy a system of yum repo mirrors within OpenStack clouds for tenant use. CPU and RAM requirements were low, but a fair amount of storage was needed to mirror the growing collections of upstream rpm and metadata files for multiple versions of two Linux distributions. A small instance flavor was chosen with 4 GB RAM and one vCPU, but only a 50 GB virtual disk volume. That 50 GB volume, which itself mapped to an RBD volume, quickly filled up as new package versions and new distributions were added. The OpenStack Cinder interface to RBD was used to provision a 500 GB volume that was then attached to the instance, where the virtio driver presented it as /dev/vdb. An EXT4 filesystem was created on that device and an entry added to /etc/fstab to mount it at each boot, and the payload data was moved over to its capacious new home.
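The guest-side steps in this use case can be sketched as follows; the mount point `/srv/repo` and the source path are illustrative placeholders, and the device name assumes the virtio driver presented the new volume as /dev/vdb:

```shell
# Create an ext4 filesystem on the newly attached 500 GB volume
sudo mkfs.ext4 /dev/vdb

# Mount it now and persistently at each boot
sudo mkdir -p /srv/repo
echo '/dev/vdb /srv/repo ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
sudo mount /srv/repo

# Move the payload data over (source path is hypothetical)
sudo rsync -a /var/www/repo/ /srv/repo/
```

The `nofail` mount option is a prudent choice for attached volumes: it lets the instance boot even if the volume is temporarily unavailable.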
Alert readers might suggest simply resizing the original volume. This may be possible in some environments, but is more complex and requires additional steps.