Ceph deployment
Cluster node planning
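No node plan was written down here; a minimal layout consistent with the commands in the rest of this section (IP addresses, roles and the /dev/sdb data disk are all taken from the output below) would be:
hostname   IP              roles                             data disk
ceph1      192.168.31.62   ceph-deploy, mon, mgr, mds, osd   /dev/sdb
ceph2      192.168.31.63   mon, mgr, mds, osd                /dev/sdb
ceph3      192.168.31.64   mon, mgr, mds, osd                /dev/sdb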
Yum repository setup
yum settings
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat << EOM > /etc/yum.repos.d/ceph.repo
[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
EOM
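With the repo file in place, it is usually worth refreshing the yum metadata before continuing (an optional, standard yum step not shown in the original transcript):
yum clean all
yum makecache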
[root@ceph2 yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
- base: mirrors.aliyun.com
- extras: mirrors.aliyun.com
- updates: mirrors.aliyun.com
repo id repo name status
Ceph-SRPMS Ceph SRPMS packages 42
Ceph-aarch64 Ceph aarch64 packages 966
Ceph-noarch Ceph noarch packages 184
Ceph-x86_64 Ceph x86_64 packages 1,050
base/7/x86_64 CentOS-7 - Base - mirrors.aliyun.com 10,072
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 13,770
extras/7/x86_64 CentOS-7 - Extras - mirrors.aliyun.com 515
updates/7/x86_64 CentOS-7 - Updates - mirrors.aliyun.com 4,857
repolist: 31,456
Firewall and SELinux settings
[root@ceph3 yum.repos.d]# systemctl stop firewalld
[root@ceph3 yum.repos.d]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@ceph3 yum.repos.d]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@ceph3 yum.repos.d]# setenforce 0
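A quick way to confirm the runtime change (not part of the original transcript; getenforce reports Permissive after setenforce 0, and Disabled only after a reboot):
getenforce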
Configure /etc/hosts and passwordless SSH login
[root@ceph1 yum.repos.d]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.62 ceph1
192.168.31.63 ceph2
192.168.31.64 ceph3
[root@ceph1 yum.repos.d]# ssh-copy-id ceph1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph1 (192.168.31.62)' can't be established.
ECDSA key fingerprint is SHA256:OH5pyMdEvEG4x/oM18Ent4HwefrUAbgYJBjQ3Kq+7k8.
ECDSA key fingerprint is MD5:0c:95:e1:9d:6c:ed:ba:29:22:a8:e8:0a:70:1c:c1:75.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph1'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph1 yum.repos.d]# ssh-copy-id ceph2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph2 (192.168.31.63)' can't be established.
ECDSA key fingerprint is SHA256:/AH9vHyS5KmkRRlFEuE+dyV5ely4wylwzoFnX593cAw.
ECDSA key fingerprint is MD5:f1:89:e1:e6:f8:5e:dd:31:e7:99:61:d6:6e:d8:af:14.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph2'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph1 yum.repos.d]# ssh-copy-id ceph3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph3 (192.168.31.64)' can't be established.
ECDSA key fingerprint is SHA256:K2NvzcoOHxdoE16RgZAwm2pXxOw2xYa/lsLPR+7L2LY.
ECDSA key fingerprint is MD5:3b:1b:a1:b7:08:04:4c:10:ba:f8:0d:b3:c6:b1:5e:57.
Are you sure you want to continue connecting (yes/no)? ys
Please type 'yes' or 'no': yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ceph3'"
and check to make sure that only the key(s) you wanted were added.
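The key pair used by ssh-copy-id above is assumed to exist on ceph1 already, and /etc/hosts has to be the same on all three nodes; a minimal sketch of those two preparatory steps (hypothetical, run on ceph1):
ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
for h in ceph2 ceph3; do scp /etc/hosts $h:/etc/hosts; done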
Install NTP; nodes 2 and 3 synchronize time from node 1
yum install ntp ntpdate
[root@ceph1 yum.repos.d]# ntpq -pn
remote refid st t when poll reach delay offset jitter
*84.16.73.33 .GPS. 1 u 20 64 1 218.562 1.376 1.363
+193.182.111.14 77.40.226.121 2 u 19 64 1 257.595 -5.400 3.540
-94.237.64.20 144.126.242.176 3 u 50 64 1 221.141 -57.972 2.431
+119.28.183.184 100.122.36.196 2 u 50 64 1 56.667 -11.148 1.622
Edit /etc/ntp.conf on ceph2 and ceph3
server 192.168.31.62 iburst
systemctl restart ntpd
[root@ceph3 yum.repos.d]# ntpq -pn
remote refid st t when poll reach delay offset jitter
192.168.31.62 84.16.73.33 2 u 48 64 0 0.000 0.000 0.000
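For ceph1 to serve time to ceph2 and ceph3, its own /etc/ntp.conf must allow the cluster subnet; a sketch of the relevant server-side lines (standard ntpd directives, not shown in the original notes):
restrict 192.168.31.0 mask 255.255.255.0 nomodify notrap
# keep ceph1's upstream server lines as installed, then make ntpd persistent on every node:
systemctl enable ntpd
systemctl start ntpd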
2. Install ceph-deploy on node1
mkdir my-cluster
cd my-cluster
yum -y install ceph-deploy python-setuptools
ceph-deploy -h
Check the version: 2.0.1
Create the cluster and the first mon
ceph-deploy new ceph1 --public-network 10.152.194.0/24
If public-network is not specified when the cluster is initialized, additional mons cannot be added later until the configuration is fixed.
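If the option was forgotten, the usual recovery (a hedged sketch using standard ceph-deploy commands) is to add the network to ceph.conf in the deploy directory and push it back out to every node:
# my-cluster/ceph.conf, [global] section
public network = 10.152.194.0/24
ceph-deploy --overwrite-conf config push ceph1 ceph2 ceph3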
3. Install the Ceph base packages (on every host)
yum list --showduplicates ceph
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
- base: mirrors.aliyun.com
- extras: mirrors.aliyun.com
- updates: mirrors.aliyun.com
Installed Packages
ceph.x86_64 2:14.2.22-0.el7 @Ceph-x86_64
Available Packages
ceph.x86_64 2:14.1.0-0.el7 Ceph-x86_64
ceph.x86_64 2:14.1.1-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.0-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.1-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.2-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.3-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.4-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.5-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.6-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.7-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.8-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.9-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.10-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.11-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.12-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.13-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.14-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.15-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.16-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.17-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.18-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.19-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.20-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.21-0.el7 Ceph-x86_64
ceph.x86_64 2:14.2.22-0.el7 Ceph-x86_64
Check the available versions; the latest is 14.2.22.
yum -y install ceph ceph-mgr ceph-mon ceph-mds ceph-radosgw
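After installation, each node should report the same release (a quick check; the output shape is illustrative):
ceph --version
# ceph version 14.2.22 (...) nautilus (stable)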
4. Initialize the mon on node1
ceph-deploy mon create-initial
Push the admin key and configuration to all nodes
ceph-deploy admin ceph1 ceph2 ceph3
ceph -s
5. Deploy mgr
ceph-deploy mgr create ceph1
ceph -s
[root@ceph1 my-cluster]# ceph -s
cluster:
id: 3585d021-31ef-45e3-b469-db44a158dcf1
health: HEALTH_WARN
mon is allowing insecure global_id reclaim
services:
mon: 1 daemons, quorum ceph1 (age 2m)
mgr: ceph1(active, since 4s)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
6. Add OSDs
ceph-deploy osd create ceph1 --data /dev/sdb
ceph-deploy osd create ceph2 --data /dev/sdb
ceph-deploy osd create ceph3 --data /dev/sdb
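If a data disk has been used before, ceph-deploy can inspect and wipe it first (standard ceph-deploy subcommands, not part of the original run):
ceph-deploy disk list ceph1
ceph-deploy disk zap ceph1 /dev/sdb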
Check the cluster status
ceph -s
[root@ceph1 my-cluster]# ceph -s
cluster:
id: 3585d021-31ef-45e3-b469-db44a158dcf1
health: HEALTH_WARN
mon is allowing insecure global_id reclaim
services:
mon: 1 daemons, quorum ceph1 (age 6m)
mgr: ceph1(active, since 3m)
osd: 3 osds: 3 up (since 6s), 3 in (since 6s)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs:
Check the OSD status
ceph osd tree
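ceph osd status and ceph osd df show per-OSD state and usage as well (standard commands, output omitted here):
ceph osd status
ceph osd df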
7. Scale out the mons
ceph-deploy mon add ceph2 --address 192.168.31.63
ceph-deploy mon add ceph3 --address 192.168.31.64
[root@ceph1 my-cluster]# ceph -s
cluster:
id: 3585d021-31ef-45e3-b469-db44a158dcf1
health: HEALTH_WARN
mon is allowing insecure global_id reclaim
clock skew detected on mon.ceph2
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2s)
mgr: ceph1(active, since 6m)
osd: 3 osds: 3 up (since 2m), 3 in (since 2m)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs:
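The clock skew warning on mon.ceph2 normally clears once ceph2 and ceph3 are forced back in sync with ceph1; a hedged sketch of the usual fix, run on the skewed node:
systemctl stop ntpd
ntpdate 192.168.31.62
systemctl start ntpd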
View the detailed quorum information
ceph quorum_status --format json-pretty
"election_epoch": 12,
"quorum": [
0,
1,
2
],
"quorum_names": [
"ceph1",
"ceph2",
"ceph3"
],
"quorum_leader_name": "ceph1",
"quorum_age": 46,
"monmap": {
"epoch": 3,
"fsid": "3585d021-31ef-45e3-b469-db44a158dcf1",
"modified": "2023-03-23 16:14:59.007523",
"created": "2023-03-23 16:06:16.746777",
"min_mon_release": 14,
"min_mon_release_name": "nautilus",
"features": {
"persistent": [
"kraken",
"luminous",
"mimic",
"osdmap-prune",
"nautilus"
],
"optional": []
},
"mons": [
{
"rank": 0,
"name": "ceph1",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "192.168.31.62:3300",
"nonce": 0
},
{
"type": "v1",
"addr": "192.168.31.62:6789",
"nonce": 0
}
]
},
"addr": "192.168.31.62:6789/0",
"public_addr": "192.168.31.62:6789/0"
},
{
"rank": 1,
"name": "ceph2",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "192.168.31.63:3300",
"nonce": 0
},
{
"type": "v1",
"addr": "192.168.31.63:6789",
"nonce": 0
}
]
},
"addr": "192.168.31.63:6789/0",
"public_addr": "192.168.31.63:6789/0"
},
{
"rank": 2,
"name": "ceph3",
"public_addrs": {
"addrvec": [
{
"type": "v2",
"addr": "192.168.31.64:3300",
"nonce": 0
},
{
"type": "v1",
"addr": "192.168.31.64:6789",
"nonce": 0
}
]
},
"addr": "192.168.31.64:6789/0",
"public_addr": "192.168.31.64:6789/0"
}
]
}
}
ceph mon stat
e3: 3 mons at {ceph1=[v2:192.168.31.62:3300/0,v1:192.168.31.62:6789/0],ceph2=[v2:192.168.31.63:3300/0,v1:192.168.31.63:6789/0],ceph3=[v2:192.168.31.64:3300/0,v1:192.168.31.64:6789/0]}, election epoch 12, leader 0 ceph1, quorum 0,1,2 ceph1,ceph2,ceph3
ceph mon dump
epoch 3
fsid 3585d021-31ef-45e3-b469-db44a158dcf1
last_changed 2023-03-23 16:14:59.007523
created 2023-03-23 16:06:16.746777
min_mon_release 14 (nautilus)
0: [v2:192.168.31.62:3300/0,v1:192.168.31.62:6789/0] mon.ceph1
1: [v2:192.168.31.63:3300/0,v1:192.168.31.63:6789/0] mon.ceph2
2: [v2:192.168.31.64:3300/0,v1:192.168.31.64:6789/0] mon.ceph3
dumped monmap epoch 3
8. Scale out the mgrs
ceph-deploy mgr create ceph2 ceph3
[root@ceph1 my-cluster]# ceph -s
cluster:
id: 3585d021-31ef-45e3-b469-db44a158dcf1
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 4m)
mgr: ceph1(active, since 10m), standbys: ceph2, ceph3
osd: 3 osds: 3 up (since 7m), 3 in (since 7m)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs:
# Disable the mons' insecure global_id reclaim mode
ceph config set mon auth_allow_insecure_global_id_reclaim false
[root@ceph1 ~]# ceph -s
cluster:
id: 3585d021-31ef-45e3-b469-db44a158dcf1
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 4h)
mgr: ceph1(active, since 4h), standbys: ceph3, ceph2
osd: 3 osds: 3 up (since 4h), 3 in (since 22h)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 57 GiB / 60 GiB avail
pgs:
File storage (CephFS) installation
ceph-deploy mds create ceph1
ceph-deploy mds create ceph2
ceph-deploy mds create ceph3
[root@ceph1 my-cluster]# ceph -s
cluster:
id: 444cfc56-3584-4ec2-8d9a-5fd95763ce6f
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 34m)
mgr: ceph1(active, since 2h), standbys: ceph2, ceph3
mds: 3 up:standby
osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 897 GiB / 900 GiB avail
pgs:
Create the pools (3 replicas by default)
ceph osd pool create cephfs-metadata 16 16
ceph osd pool create cephfs-data 16 16
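The two 16s are pg_num and pgp_num for each pool. The replica count can be confirmed afterwards (a standard check; size 3 matches the default mentioned above):
ceph osd pool get cephfs-data size
# size: 3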
[root@ceph1 my-cluster]# ceph osd lspools
1 cephfs-metadata
2 cephfs-data
[root@ceph1 my-cluster]# ceph -s
cluster:
id: 444cfc56-3584-4ec2-8d9a-5fd95763ce6f
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 42m)
mgr: ceph1(active, since 2h), standbys: ceph2, ceph3
mds: 3 up:standby
osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
data:
pools: 2 pools, 32 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 897 GiB / 900 GiB avail
pgs: 31.250% pgs unknown
18.750% pgs not active
16 active+clean
10 unknown
6 creating+peering
Create the filesystem
[root@ceph1 my-cluster]# ceph fs new cephfs-demo cephfs-metadata cephfs-data
new fs with metadata pool 1 and data pool 2
[root@ceph1 my-cluster]# ceph fs ls
name: cephfs-demo, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
[root@ceph1 my-cluster]# ceph -s
cluster:
id: 444cfc56-3584-4ec2-8d9a-5fd95763ce6f
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 46m)
mgr: ceph1(active, since 2h), standbys: ceph2, ceph3
mds: cephfs-demo:1 {0=ceph1=up:active} 2 up:standby
osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
data:
pools: 2 pools, 32 pgs
objects: 22 objects, 2.2 KiB
usage: 3.0 GiB used, 897 GiB / 900 GiB avail
pgs: 32 active+clean
io:
client: 409 B/s wr, 0 op/s rd, 1 op/s wr
Configure the yum repo (on the CephFS client)
yum -y install ceph
Define the secret file
[root@ceph1 cephfs]# cat /app/my-cluster/ceph.client.admin.keyring
[client.admin]
key = AQDWclRkfw37FhAAHEWD0xqyDP2Q1VTIK1nUNg==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
cat admin.secret
AQDWclRkfw37FhAAHEWD0xqyDP2Q1VTIK1nUNg==
Mount
mount -t ceph 10.152.194.41:6789,10.152.194.42:6789,10.152.194.43:6789:/ /mnt/ceph -o name=admin,secretfile=admin.secret
or, passing the key inline:
mount -t ceph 10.152.194.41:6789,10.152.194.42:6789,10.152.194.43:6789:/ /mnt/ceph -o name=admin,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
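To make the mount persistent across reboots, an /etc/fstab entry of the following shape can be used (a sketch; the secret file path /root/admin.secret is an assumption):
10.152.194.41:6789,10.152.194.42:6789,10.152.194.43:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/root/admin.secret,noatime,_netdev  0 0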
Install nfs-ganesha (on the dashboard server)
rpm -qa |grep librgw
librgw2-14.2.7-0.el7.x86_64
rpm -qa |grep libcephfs
libcephfs2-14.2.7-0.el7.x86_64
[root@ceph1 ~]# cat /etc/yum.repos.d/nfs-ganesha.repo
[nfs-ganesha]
name=nfs-ganesha
baseurl=http://us-west.ceph.com/nfs-ganesha/rpm-V2.7-stable/nautilus/x86_64/
enabled=1
gpgcheck=0
priority=1
yum install nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rgw -y
Edit the /etc/ganesha/ganesha.conf configuration file and add:
%include "/etc/ganesha/ceph.conf"
[root@ceph1 ganesha]# cat /etc/ganesha/ceph.conf
NFSv4
{
Minor_Versions = 1,2;
}
MDCACHE {
Dir_Chunk = 0;
NParts = 1;
Cache_Size = 1;
}
EXPORT
{
Export_ID=2000;
Protocols = 4;
Transports = TCP;
Path = /;
Pseudo = /shares/;
Access_Type = RW;
Attr_Expiration_Time = 0;
Squash = No_root_squash;
SecType = sys;
FSAL {
Name = CEPH;
}
}
systemctl start nfs-ganesha.service
systemctl status nfs-ganesha.service
systemctl enable nfs-ganesha.service
Mount
mount -t nfs -o nfsvers=4.1,proto=tcp 10.152.194.41:/shares /mnt/nfs
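Assuming both mounts from earlier in this section are active, a file written through the NFS export (Path = /, Pseudo = /shares/) should also be visible under the CephFS kernel mount, which makes for a quick sanity check (illustrative):
touch /mnt/nfs/ganesha-test
ls /mnt/ceph/
# ganesha-test should appear in the listing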
Dashboard deployment
1. Install on every mgr node
yum install ceph-mgr-dashboard -y
2. Enable the dashboard mgr module
ceph mgr module enable dashboard --force
3. Generate and install a self-signed certificate
ceph dashboard create-self-signed-cert
4. Create a dashboard login user and password
echo "admin" > ~/passwd.txt
ceph dashboard ac-user-create admin -i ~/passwd.txt administrator
5. Check how the service is exposed
ceph mgr services
"dashboard": "https://ceph1:8443/"