- Learning Ceph (Second Edition)
- Anthony D'Atri, Vaibhav Bhembre, Karan Singh
CephFS Metadata Server (MDS)
In order to present data in the hierarchy of a familiar tree-organized filesystem, Ceph must store additional metadata to satisfy the expected semantics:
- Permissions
- Hierarchy
- Names
- Timestamps
- Owners
- Mostly POSIX-compliant behavior. Mostly.
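The metadata listed above corresponds to what an ordinary POSIX `stat(1)` call reports for any file or directory. As a minimal sketch, we can inspect a local directory here; on an actual CephFS client the path would sit under the mount point (`/mnt/cephfs` below is a hypothetical mount point, not one the book prescribes):

```shell
# Show the name, permissions, owner, and timestamp metadata the MDS
# would track for a path. Uses GNU coreutils stat with --printf.
# On a CephFS client, replace '.' with e.g. /mnt/cephfs/some/dir
# (hypothetical mount point).
stat --printf 'name:  %n\nperms: %A\nowner: uid=%u gid=%g\nmtime: %y\n' .
```

The same attributes are available to applications through `stat(2)`/`fstat(2)`, which is what "mostly POSIX compliant" buys CephFS consumers.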
Unlike legacy systems, the CephFS MDS is designed to facilitate scaling. Note that actual file data does not flow through the MDS: as with RBD volumes, CephFS clients use RADOS to perform bulk data operations directly against a scalable number of distributed OSD storage daemons. In a loose sense, the MDS implements a control plane while RADOS implements the data plane; in fact, the metadata managed by the MDS itself resides on the OSDs, via RADOS, alongside the payload file data.

It is important to note that MDS servers are required only if you will use the CephFS file-based interface; the majority of clusters that provide only block and/or object user-facing services do not need to provision them at all. It is also important to note that CephFS is best limited to use among servers (a B2B service, if you will) as opposed to B2C. Some Ceph operators have experimented with running NFS or Samba (SMB/CIFS) gateways to provide services directly to workstation clients, but this should be considered an advanced configuration.
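Mounting CephFS on such a server-side client can be sketched as follows. This is a configuration sketch, not a complete recipe: the monitor hostname, mount point, and secretfile path are assumptions for illustration, and the commands require a reachable cluster.

```shell
# Kernel client mount (mon1.example.com and the secretfile path are
# illustrative assumptions):
sudo mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Alternatively, the userspace FUSE client, which reads cluster and key
# information from /etc/ceph by default:
sudo ceph-fuse /mnt/cephfs
```

The kernel client generally offers better performance, while `ceph-fuse` tends to track newer CephFS features more closely; either way the client talks to the MDS for metadata and directly to the OSDs for file contents.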
Although CephFS is the oldest of Ceph's user-facing interfaces, it has not received as much user and developer attention as have the RBD block service and the common RADOS core. In fact, CephFS was not considered ready for production until the Jewel release in early 2016, and as I write it still has certain limitations: notably, running multiple MDSes in parallel for scaling and high availability remains problematic. While one can and should run multiple MDSes, with the Kraken release only one can safely be active at any given time; additional MDS instances are advised to operate in a standby role for failover in case the primary fails. With the Luminous release, multiple active MDS instances are supported, and it is expected that future releases will continue to improve the availability and scaling of the MDS services.
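On a Luminous or later cluster, enabling and inspecting multiple active MDS daemons can be sketched with the `ceph` CLI. These commands require a running cluster; `cephfs` below is an assumed filesystem name, not a fixed one.

```shell
# Allow up to two active MDS daemons for the filesystem named "cephfs"
# (Luminous or later; "cephfs" is an assumed name for illustration):
ceph fs set cephfs max_mds 2

# Inspect which MDS daemons are active and which are standing by:
ceph mds stat
ceph fs status cephfs
```

Keeping at least one standby MDS beyond `max_mds` preserves the failover behavior described above even after a second daemon becomes active.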
For more information, see http://docs.ceph.com/docs/master/cephfs/best-practices and http://docs.ceph.com/docs/master/cephfs/posix.