Ceph support for ZFS

A walkthrough of a Ceph cluster whose OSDs run on ZFS.

View the cluster status (ceph -s):

    cluster 042f4922-d19f-4e00-8025-7f7b0ff2aa9a
     health HEALTH_WARN
            too many PGs per OSD (1568 > max 300)
     monmap e1: 3 mons at {vm181=192.168.19.x:6789/0,vm182=192.168.19.x:6789/0,vm183=192.168.19.x:6789/0}
            election epoch 34, quorum 0,1,2 vm181,vm182,vm183
      fsmap e6: 1/1/1 up {0=vm183=up:active}, 1 up:standby
     osdmap e80: 6 osds: 6 up, 6 in
            flags sortbitwise
      pgmap v657: 3136 pgs, 18 pools, 3704 bytes data, 191 objects
            31212 MB used, 228 GB / 259 GB avail
                3136 active+clean
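
The HEALTH_WARN is simple arithmetic: 3136 PGs replicated 3 ways (the default pool size, assumed here since pool settings are not shown) across 6 OSDs gives 3136 × 3 / 6 = 1568 PG copies per OSD, far above the monitor's warning threshold of 300 (mon_pg_warn_max_per_osd). The per-pool values can be checked directly; the pool name rbd below is only an example:

    ceph osd pool get rbd pg_num   # PG count for one pool
    ceph osd pool get rbd size     # replica count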

View the OSD tree:

    [root@vm181 ~]# ceph osd tree
    ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -3 6.00000 root hdd
    -2 2.00000     host vm181_hdd
     0 1.00000         osd.0           up  1.00000          1.00000
     1 1.00000         osd.1           up  1.00000          1.00000
    -4 2.00000     host vm182_hdd
     2 1.00000         osd.2           up  1.00000          1.00000
     3 1.00000         osd.3           up  1.00000          1.00000
    -5 2.00000     host vm183_hdd
     4 1.00000         osd.4           up  1.00000          1.00000
     5 1.00000         osd.5           up  1.00000          1.00000
    -1       0 root default
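
All six OSDs hang off a custom root named hdd, leaving root default empty at weight 0. The build steps are not shown in the original; a sketch of how such a tree is typically assembled with standard CRUSH commands (bucket names copied from the output above):

    ceph osd crush add-bucket hdd root
    ceph osd crush add-bucket vm181_hdd host
    ceph osd crush move vm181_hdd root=hdd
    ceph osd crush create-or-move osd.0 1.0 root=hdd host=vm181_hdd
    ceph osd crush create-or-move osd.1 1.0 root=hdd host=vm181_hdd
    # likewise for vm182_hdd/vm183_hdd and osd.2 through osd.5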

View the configuration file (the osd.1 section of ceph.conf):

    [osd.1]
    host = vm181
    journal dio = false
    filestore zfs snapshot = true
    osd journal = /var/lib/ceph/osd/ucsm-1/journal
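
Two options here are ZFS-specific: journal dio = false, because ZFS on Linux did not support the O_DIRECT writes the OSD journal performs by default, and filestore zfs snapshot = true, which lets the FileStore take its consistency points as ZFS snapshots. If every OSD sits on ZFS (an assumption; only osd.1 is shown), the same settings could live in a shared section instead:

    # assumed generalization of the per-OSD settings above
    [osd]
    journal dio = false
    filestore zfs snapshot = true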

View the ZFS pool (zpool status):

      pool: osd
     state: ONLINE
      scan: none requested
    config:

        NAME        STATE     READ WRITE CKSUM
        osd         ONLINE       0     0     0
          vda       ONLINE
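
The pool named osd is backed by a single disk, vda. A minimal sketch of how such a pool could be created and mounted as the OSD data directory, assuming the mountpoint matches the journal path in the config above (the xattr tuning is a common recommendation for Ceph, not shown in the original):

    zpool create osd vda
    zfs set mountpoint=/var/lib/ceph/osd/ucsm-1 osd   # assumed OSD data dir
    zfs set xattr=sa osd   # xattrs stored with the dnode; Ceph uses xattrs heavily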
