Learning Ceph (Second Edition)
Anthony D'Atri, Vaibhav Bhembre, Karan Singh
MONs
Of all the nomenclature and jargon within the Ceph ecosystem, Ceph MONs are perhaps the most misleadingly named. While MONs do monitor cluster status, they do much more as well: they act as arbiters, traffic cops, and physicians for the cluster as a whole. As with OSDs, a Ceph MON is, strictly speaking, a daemon process (ceph-mon) that communicates with peer MONs, OSDs, and users, maintaining and distributing various information vital to cluster operations. In practice, the term is also used to refer to the servers on which these daemons run, that is, monitor nodes, MON nodes, or simply mons.
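To make this concrete, the daemons and the monitor map they maintain can be inspected with the ceph CLI from any node holding an admin keyring. A minimal sketch; the monitor names and addresses in real output will of course be your own:

```bash
# Summarize the monitors and current quorum membership
ceph mon stat

# Dump the full monitor map: each MON's rank, name, and address
ceph mon dump
```

On a healthy three-MON cluster, ceph mon stat reports something along the lines of 3 mons at {...}, election epoch N, quorum 0,1,2.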
As with all other Ceph components, MONs need to be distributed, redundant, and highly available while also ensuring strict data consistency at all times. MONs accomplish this by participating in a sophisticated quorum based on an algorithm called Paxos. It is recommended to provision at least three MONs for production clusters, and always an odd number, to avoid a problematic situation known as split brain, in which network issues prevent some members from talking to each other, with the potential for more than one believing it is in charge and, worse yet, for data divergence. Readers familiar with other clustering technologies, such as Oracle Solaris Cluster, may already be familiar with some of these concepts.
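The quorum itself can be examined directly. A strict majority of MONs (two of three, three of five) must agree before cluster maps can change, which is also why an even count buys nothing: four MONs still tolerate only a single failure, the same as three. A quick check, assuming the same admin access as above:

```bash
# Show quorum membership, the elected leader, and the monitor map
ceph quorum_status --format json-pretty
```

The quorum_names and quorum_leader_name fields in the JSON output identify which MONs are participating and which one currently leads the Paxos rounds.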
Among the data managed by Ceph MONs are maps of OSDs, other MONs, and placement groups, as well as the CRUSH map, which describes where data should be placed and found. MONs are thus distributors of this data: they distribute initial state and updates to each other, to Ceph OSDs, and to Ceph clients. Alert readers might ask at this point: Hey, you said Ceph doesn't have a bottlenecked, centralized metadata store; who are you trying to kid?
The answer is that while these maps may be considered a type of metadata, they are data concerning the Ceph cluster itself, not user data. The secret sauce here is CRUSH, which will be described in more detail later in this chapter. The CRUSH algorithm operates on the CRUSH map and the PG map so that both clients and the Ceph back end can independently determine where a given piece of data lives. Clients are thus kept up to date with all they need to perform their own calculations directing them to their data within the cluster's constellation of OSDs. By enabling clients to dynamically determine where their data resides, Ceph scales without choke points or bottlenecks.
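This client-side calculation is easy to observe: the ceph CLI can compute, rather than look up, the placement of any object name. In this sketch the pool name rbd and the object name myobject are hypothetical; any existing pool works, and the object need not even exist, since placement is pure computation:

```bash
# Compute where an object would be placed: object name -> PG -> OSDs
ceph osd map rbd myobject

# Extract and decompile the CRUSH map the calculation is based on
ceph osd getcrushmap -o /tmp/crushmap.bin
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt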