```
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=web]:
What IP address or DNS name should be used to reach this node? [default=10.100.0.80]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
IP address or FQDN of an existing cluster node: 10.100.0.31
Cluster fingerprint: 79ec4bdfa32501a664b1adde03a2296f7d663a43676a422781668df1bec2ee12
You can validate this fingerprint by running "lxc info" locally on an existing node.
Is this the correct fingerprint? (yes/no) [default=no]: yes
Cluster trust password:
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
```
Cluster management

Listing the cluster nodes
```
$ lxc cluster list
+---------+--------------------------+----------+--------+-------------------+
|  NAME   |           URL            | DATABASE | STATE  |      MESSAGE      |
+---------+--------------------------+----------+--------+-------------------+
| vm02    | https://10.100.0.33:8443 | YES      | ONLINE | fully operational |
+---------+--------------------------+----------+--------+-------------------+
| vmsvr02 | https://10.100.0.31:8443 | YES      | ONLINE | fully operational |
+---------+--------------------------+----------+--------+-------------------+
| web     | https://10.100.0.80:8443 | YES      | ONLINE | fully operational |
+---------+--------------------------+----------+--------+-------------------+
```
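For scripting, the same listing can be checked programmatically. A minimal sketch, assuming your LXD version supports the `--format csv` option on `lxc cluster list` (the awk field positions are an assumption based on the table columns above):

```shell
# Exit non-zero if any cluster member is not ONLINE.
# Assumes CSV columns: name,url,database,state,message
if lxc cluster list --format csv | awk -F, '$4 != "ONLINE" {bad=1} END {exit bad}'; then
  echo "all members online"
fi
```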
```
### format: rbd device map {pool-name}/{image-name} --id {user-name}
$ sudo rbd device map data
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel
with "rbd feature disable data object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
$ sudo rbd feature disable data object-map fast-diff deep-flatten
$ sudo rbd device map data
/dev/rbd0
```
The mapped block device shows up as /dev/rbd0 and can be used like any ordinary block device.
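For example, it can be formatted and mounted like a local disk. A minimal sketch; the ext4 filesystem and the /mnt/rbd mount point are assumptions for illustration, not values from this setup:

```shell
# Use the mapped RBD image like any other disk.
# /dev/rbd0 is the device produced by `rbd device map` above.
sudo mkfs.ext4 /dev/rbd0          # one-time: create a filesystem on the image
sudo mkdir -p /mnt/rbd            # assumed mount point
sudo mount /dev/rbd0 /mnt/rbd     # mount it like a normal block device
df -h /mnt/rbd                    # verify the mount
```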
Listing mapped devices
```
$ sudo rbd device list
id  pool  namespace  image  snap  device
0   rbd              data   -     /dev/rbd0
```
Creating pools

A Ceph file system must reside on RADOS pools: it needs at least one data pool and one metadata pool.
```
$ sudo ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
$ sudo ceph osd pool create cephfs_metadata 128
Error ERANGE: pg_num 128 size 3 would mean 768 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
john@node6:~$ sudo ceph osd pool create cephfs_metadata 24
pool 'cephfs_metadata' created
```

The second create fails because another 128-PG pool at replica size 3 would push the cluster past its PG budget (250 PGs per OSD × 3 OSDs = 750 PG replicas), so a smaller pg_num of 24 is used instead.
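The monitor's refusal is plain arithmetic, which can be reproduced with shell expansion. The 250 (mon_max_pg_per_osd) and 3 (num_in_osds) come straight from the error message above:

```shell
# Reproduce the PG-budget arithmetic from the ERANGE error.
limit=$((250 * 3))        # mon_max_pg_per_osd * num_in_osds = hard cap on PG replicas
new=$((128 * 3))          # the rejected pool: 128 PGs at replica size 3
echo "limit=$limit new_pool_pg_replicas=$new"
```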
Creating the file system
```
$ sudo ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
$ sudo ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
```
Checking MDS status
```
$ sudo ceph mds stat
cephfs:1 {0=node6=up:active} 2 up:standby
```

The file system has one active MDS (rank 0 on node6) and two standbys.
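With an active MDS, the file system can be mounted on a client via the kernel CephFS driver. A minimal sketch; MON_IP, the /mnt/cephfs mount point, and the secret-file path are placeholders, not values taken from this cluster:

```shell
# Hypothetical client mount. Replace MON_IP with a monitor address and
# put the client key (e.g. from `ceph auth get-key client.admin`) in the
# secretfile before mounting.
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph MON_IP:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
df -h /mnt/cephfs
```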