This article shows how Ceph can create a pool on a specified set of OSDs by using CRUSH device classes.
In the earlier approach, creating a pool on specified OSDs essentially meant picking out a subset of OSDs (for example, OSDs of a different type), rebuilding a separate logical OSD tree from them, creating a crush_rule against that new tree, and then pointing the pool's crush_rule at it. It turns out there is a more convenient way that needs no separate tree: just add a new crush_rule that maps to a device class (when the disks carry more than one class, each child host bucket simply gets an id per class, as the decompiled CRUSH map below shows).
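For reference, a minimal sketch of how the decompiled map below is obtained, assuming a Luminous-or-later cluster; the file name crushmap.bin is just a placeholder:

# check which device classes the cluster already knows about
[root@ceph-node1 opt]# ceph osd crush class ls
# dump the binary CRUSH map and decompile it into editable text
[root@ceph-node1 opt]# ceph osd getcrushmap -o crushmap.bin
[root@ceph-node1 opt]# crushtool -d crushmap.bin -o decrushmap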
[root@ceph-node1 opt]# cat decrushmap
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class ssd
device 2 osd.2 class hdd
device 3 osd.3 class ssd
device 4 osd.4 class hdd
device 5 osd.5 class ssd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-node1 {
    id -3        # do not change unnecessarily
    id -4 class hdd        # do not change unnecessarily
    id -15 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.0 weight 0.029
    item osd.1 weight 0.029
}
host ceph-node2 {
    id -5        # do not change unnecessarily
    id -6 class hdd        # do not change unnecessarily
    id -16 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.2 weight 0.029
    item osd.3 weight 0.029
}
host ceph-node3 {
    id -7        # do not change unnecessarily
    id -8 class hdd        # do not change unnecessarily
    id -17 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.4 weight 0.029
    item osd.5 weight 0.029
}
host ceph-node4 {
    id -9        # do not change unnecessarily
    id -10 class hdd        # do not change unnecessarily
    id -18 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.6 weight 0.029
    item osd.7 weight 0.029
}
host ceph-node5 {
    id -11        # do not change unnecessarily
    id -12 class hdd        # do not change unnecessarily
    id -19 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.8 weight 0.029
    item osd.9 weight 0.029
}
host ceph-node6 {
    id -13        # do not change unnecessarily
    id -14 class hdd        # do not change unnecessarily
    id -20 class ssd        # do not change unnecessarily
    # weight 0.058
    alg straw2
    hash 0    # rjenkins1
    item osd.10 weight 0.029
    item osd.11 weight 0.029
}
root default {
    id -1        # do not change unnecessarily
    id -2 class hdd        # do not change unnecessarily
    id -21 class ssd        # do not change unnecessarily
    # weight 0.354
    alg straw2
    hash 0    # rjenkins1
    item ceph-node1 weight 0.059
    item ceph-node2 weight 0.059
    item ceph-node3 weight 0.059
    item ceph-node4 weight 0.059
    item ceph-node5 weight 0.059
    item ceph-node6 weight 0.059
}

# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_ssd {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
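After adding the replicated_ssd rule to the text file, the map has to be recompiled and injected back into the cluster. A sketch of those steps follows (newcrushmap.bin is an arbitrary file name); on Luminous and later, an equivalent class-aware rule can also be created in one step with ceph osd crush rule create-replicated instead of editing the map by hand:

# recompile the edited text map and load it back into the cluster
[root@ceph-node1 opt]# crushtool -c decrushmap -o newcrushmap.bin
[root@ceph-node1 opt]# ceph osd setcrushmap -i newcrushmap.bin
# alternative shortcut: let ceph generate the class-aware rule itself
[root@ceph-node1 opt]# ceph osd crush rule create-replicated replicated_ssd default host ssd
# confirm the rule now exists
[root@ceph-node1 opt]# ceph osd crush rule ls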
Afterwards, when creating a pool, simply select the new crush_rule replicated_ssd and the problem is solved.
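As a sketch, the pool can be created directly with the new rule, or an existing pool can be switched over; the pool name ssd_pool and the PG count 64 below are only examples:

# create a replicated pool that uses the replicated_ssd rule
[root@ceph-node1 opt]# ceph osd pool create ssd_pool 64 64 replicated replicated_ssd
# or point an existing pool at the new rule
[root@ceph-node1 opt]# ceph osd pool set ssd_pool crush_rule replicated_ssd
# verify which rule the pool uses
[root@ceph-node1 opt]# ceph osd pool get ssd_pool crush_rule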