Kubernetes Storage: Managing GlusterFS with Heketi
GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace to provide shared file storage. Key features:
- scales to several PB of capacity
- handles thousands of clients
- POSIX-compatible interface
- runs on commodity hardware; ordinary servers are enough
- works on any filesystem that supports extended attributes, e.g. ext4 or XFS
- supports industry-standard protocols such as NFS and SMB
- provides advanced features such as replication, quotas, geo-replication, snapshots and bitrot detection
- can be tuned for different workloads
GlusterFS volume modes
A GlusterFS volume can be created in several modes:
- Distributed (default), a.k.a. DHT: each file is placed on one server node chosen by hash.
- Replicated, a.k.a. AFR: created with `replica x`; every file is copied to x nodes.
- Striped: created with `stripe x`; files are split into chunks spread across x nodes (similar to RAID 0).
- Distributed-striped: needs at least 4 servers; e.g. `stripe 2` with 4 nodes combines DHT and Striped.
- Distributed-replicated: needs at least 4 servers; e.g. `replica 2` with 4 nodes combines DHT and AFR.
- Striped-replicated: needs at least 4 servers; e.g. `stripe 2 replica 2` with 4 nodes combines Striped and AFR.
- All three combined (distributed-striped-replicated): needs at least 8 servers; e.g. `stripe 2 replica 2`, with every 4 nodes forming one group.
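The minimum server counts quoted above all follow from one product: each distribute group needs `stripe × replica` bricks. A small sketch of that rule of thumb (a hypothetical helper, not part of GlusterFS; gluster itself enforces these constraints when you run `gluster volume create`):

```python
# Minimum brick count for a GlusterFS volume, per the rules above.
def min_bricks(stripe=1, replica=1, distribute_groups=1):
    """Each distribute group needs stripe * replica bricks."""
    return stripe * replica * distribute_groups

# Distributed-replicated: replica 2 across 2 groups -> 4 bricks/servers
assert min_bricks(replica=2, distribute_groups=2) == 4
# Distributed-striped-replicated: stripe 2, replica 2, 2 groups -> 8
assert min_bricks(stripe=2, replica=2, distribute_groups=2) == 8
```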
Introduction to Heketi
Heketi is a framework that exposes a RESTful API for managing GlusterFS volumes. It enables dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift and OpenStack, supports managing multiple GlusterFS clusters, and makes GlusterFS easier for administrators to operate. In a Kubernetes cluster, a pod sends its storage request to Heketi, and Heketi then drives the GlusterFS cluster to create the corresponding volume.
Heketi dynamically selects bricks across the cluster to build the requested volumes, making sure replicas are spread over different failure domains.
Heketi also supports any number of GlusterFS clusters, so the consuming cloud servers are not tied to a single GlusterFS cluster.
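Clients such as heketi-cli authenticate against this REST API with a JWT signed by the per-user secret (the "key" values configured in heketi.json). As a sketch of what happens under the hood, the token can be built with nothing but the Python standard library; the claim set below (iss, iat, exp, and qsh = the SHA-256 of "METHOD&PATH") follows Heketi's documented scheme, and the user/secret values match the ones used later in this article:

```python
# Build a Heketi-style JWT by hand (illustration only; heketi-cli does
# this for you on every request).
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url without padding, as required by the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def heketi_token(user: str, secret: str, method: str, path: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": user,                 # issuer = heketi user ("admin" here)
        "iat": now,                  # issued at
        "exp": now + 600,            # expiry
        # qsh ties the token to one request: SHA-256 of "METHOD&PATH"
        "qsh": hashlib.sha256(f"{method}&{path}".encode()).hexdigest(),
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256)
    return signing_input + "." + b64url(sig.digest())

# e.g. curl -H "Authorization: bearer $TOKEN" http://192.168.200.182:8080/clusters
token = heketi_token("admin", "admin", "GET", "/clusters")
parts = token.split(".")
assert len(parts) == 3   # header.claims.signature
```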
#### In production, it is recommended to run GlusterFS externally:
Hostname | IP | Role |
---|---|---|
master | 192.168.200.100 | K8S-master |
node | 192.168.200.101 | K8S-node01 |
node | 192.168.200.102 | K8S-node02 |
heketi | 192.168.200.103 | Heketi |
GlusterFS-node | 192.168.200.104 | GlusterFS01 |
GlusterFS-node | 192.168.200.105 | GlusterFS02 |
GlusterFS-node | 192.168.200.106 | GlusterFS03 |
#### For easier testing and demonstration, three hosts are used here
Environment:
Hostname | IP | Role |
---|---|---|
master01 | 192.168.200.182 | k8s-master,glusterfs01,heketi |
node01 | 192.168.200.183 | k8s-node,glusterfs02 |
node02 | 192.168.200.184 | k8s-node,glusterfs03 |
1. Build a Kubernetes cluster quickly with kubeadm (GlusterFS must run privileged inside Kubernetes, which requires the `--allow-privileged=true` flag on kube-apiserver; this version of kubeadm enables it by default.)
(omitted)
2. Install GlusterFS and form the GlusterFS cluster (GlusterFS only needs to be installed and started; there is no need to form trusted storage pools manually)
(omitted)
3. Deploy Heketi
1. Install heketi
```shell
# Add the gluster yum repo
# heketi-client: heketi client / command-line tool
[root@master01 ~]# yum -y install centos-release-gluster
[root@master01 ~]# yum -y install heketi heketi-client
```
2. Configure heketi.json
```shell
[root@master01 heketi]# cat heketi.json.bak
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    "executor": "mock",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "path/to/private_key",
      "user": "sshuser",
      "port": "Optional: ssh port. Default is 22",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },
    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug"
  }
}
```
# Modified version
```shell
[root@master01 heketi]# cat heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",            # default port
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,          # defaults to false (no authentication)
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin"         # changed
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "admin"         # changed
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    # Three executor modes:
    # mock: volumes created this way cannot be mounted (testing only);
    # kubernetes: used when GlusterFS itself is deployed by kubernetes
    "executor": "ssh",       # use ssh or kubernetes in production; ssh is used here
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",   # key path
      "user": "root",                        # user is root
      "port": "22",
      "fstab": "/etc/fstab"
    },
    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    # The sample file ships with debug; when unset the default is warning;
    # log output goes to /var/log/messages
    "loglevel": "warning"
  }
}
```
3. Set up passwordless SSH from heketi to the GlusterFS nodes
```shell
# With the ssh executor, the heketi server needs passwordless login to every node
#   of the GlusterFS cluster;
# -t: key type;
# -q: quiet mode;
# -f: output directory and name of the key; must match the "keyfile" value in the
#     ssh executor section of heketi.json;
# -N: key passphrase, "" means empty
[root@master01 ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""
# The heketi service runs as the heketi user, which needs read access to the newly
# generated key, otherwise the service fails to start
[root@master01 ~]# chown heketi:heketi /etc/heketi/heketi_key
# Distribute the public key;
# -i: the public key to copy
[root@master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.200.182
[root@master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.200.183
[root@master01 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@192.168.200.184
```
4. Start heketi
```shell
[root@master01 ~]# systemctl enable heketi && systemctl start heketi && systemctl status heketi
# Verify
[root@master01 ~]# curl 192.168.200.182:8080/hello
Hello from Heketi
```
4. Set up the GlusterFS cluster
1. Define the GlusterFS cluster with a topology.json file
```shell
# topology.json defines the hierarchy: clusters --> nodes --> node/devices --> hostnames/zone;
# node/hostnames "manage" is the management channel; fill in the host IP here whenever
#   the heketi server cannot reach GlusterFS nodes by hostname;
# node/hostnames "storage" is the data channel; it may differ from "manage";
# node/zone sets the failure domain of the node; heketi places replicas across failure
#   domains to improve data availability, e.g. use a different zone value per rack to
#   create cross-rack failure domains;
# "devices" lists the block devices of each GlusterFS node (can be several disks);
#   they must be raw devices with no filesystem on them
[root@master01 ~]# cat /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.200.182"],
              "storage": ["192.168.200.182"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.200.183"],
              "storage": ["192.168.200.183"]
            },
            "zone": 2
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.200.184"],
              "storage": ["192.168.200.184"]
            },
            "zone": 3
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}
```
2. Build the GlusterFS cluster from topology.json
```shell
# glusterd must already be running on every GlusterFS node, but trusted storage
#   pools do not need to be formed beforehand;
# heketi-cli can also add cluster, node, device and volume objects layer by layer;
# "--server http://localhost:8080" can be omitted when running heketi-cli locally;
# "--user admin --secret admin": heketi.json enables auth, so heketi-cli must pass
#   the credentials, otherwise it fails with "Error: Invalid JWT token: Unknown user"
[root@master01 ~]# heketi-cli --server http://192.168.200.182:8080 --user admin --secret admin topology load --json=/etc/heketi/topology.json
[root@master01 ~]# heketi-cli --user admin --secret admin --server http://192.168.200.182:8080 node list
Id:1629d8d3fe1f562a1d5efedfc12159d0     Cluster:6814c5edd73936a4447f0516a2886a59
Id:238052ce88248e833f0a9b10f8483481     Cluster:6814c5edd73936a4447f0516a2886a59
Id:bc3610fa65c82c489a993974dc6823c0     Cluster:6814c5edd73936a4447f0516a2886a59
# Inspect the heketi topology; no volumes or bricks exist yet;
# "heketi-cli cluster info" shows cluster details;
# "heketi-cli node info" shows node details;
# "heketi-cli device info" shows device details
[root@master01 ~]# heketi-cli --user admin --secret admin topology info

Cluster Id: 6814c5edd73936a4447f0516a2886a59

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 1629d8d3fe1f562a1d5efedfc12159d0
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 3
        Management Hostnames: 192.168.200.184
        Storage Hostnames: 192.168.200.184
        Devices:
            Id:b3bdfa6f860359cdc7bce973565f9af8   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:

        Node Id: 238052ce88248e833f0a9b10f8483481
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 1
        Management Hostnames: 192.168.200.182
        Storage Hostnames: 192.168.200.182
        Devices:
            Id:af8ba5c90b782e7762071b800f669e7c   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:

        Node Id: bc3610fa65c82c489a993974dc6823c0
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 2
        Management Hostnames: 192.168.200.183
        Storage Hostnames: 192.168.200.183
        Devices:
            Id:c834eb67bea5bd73142bde1d71d3167c   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:
```
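The clusters → nodes → devices hierarchy printed by `topology info` mirrors the structure of topology.json itself. A stdlib-only sketch (hypothetical helper, not part of heketi-cli) that walks the same layout:

```python
# Walk the clusters -> nodes -> devices hierarchy of a heketi topology.json.
import json

def walk_topology(topology: dict):
    """Yield (zone, manage_ip, device) for every device in every cluster."""
    for cluster in topology["clusters"]:
        for entry in cluster["nodes"]:
            node = entry["node"]
            for dev in entry["devices"]:
                yield node["zone"], node["hostnames"]["manage"][0], dev

sample = json.loads("""
{"clusters": [{"nodes": [
  {"node": {"hostnames": {"manage": ["192.168.200.182"],
                          "storage": ["192.168.200.182"]}, "zone": 1},
   "devices": ["/dev/sdb"]}
]}]}
""")
for zone, ip, dev in walk_topology(sample):
    print(zone, ip, dev)  # 1 192.168.200.182 /dev/sdb
```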
5. Dynamically mounting GlusterFS storage in the K8S cluster
1. Dynamic provisioning flow based on StorageClass
Kubernetes shared-storage provisioning modes:
Static: the cluster administrator creates PVs by hand, setting the backend storage properties in each PV definition;
Dynamic: the administrator does not create PVs manually. A StorageClass describes the backend storage and marks it as a "class"; the PVC then declares which class it needs, and the system automatically creates the PV and binds it to the PVC. A PVC may declare class "" to explicitly disable dynamic provisioning for itself.
The dynamic provisioning flow with a StorageClass:
1) The cluster administrator creates a StorageClass in advance;
2) A user creates a PersistentVolumeClaim (PVC) that uses the storage class;
3) The claim tells the system it needs a PersistentVolume (PV);
4) The system reads the storage class information;
5) Based on it, the system automatically creates the PV the PVC needs in the background;
6) The user creates a Pod that uses the PVC;
7) The application in the Pod persists data through the PVC;
8) The PVC in turn persists the data on the PV.
2. Define the StorageClass
# provisioner: the storage provisioner; it depends on the storage backend;
# reclaimPolicy: defaults to "Delete": deleting the PVC also deletes the PV plus the backend volume and bricks (LVM); "Retain" keeps the data for manual handling
# resturl: URL of the heketi REST API;
# restauthenabled: optional, defaults to "false"; must be "true" when heketi has auth enabled;
# restuser: optional; user name when auth is enabled;
# secretNamespace: optional; when auth is enabled, the namespace of the workloads that use the storage;
# secretName: optional; when auth is enabled, the heketi password must be stored in a secret resource;
# clusterid: optional; a cluster id, or a list of ids in the form "id1,id2";
# volumetype: optional; sets the volume type and its parameters; if unset, the provisioner decides. "volumetype: replicate:3" is a 3-replica replicate volume, "volumetype: disperse:4:2" is a disperse volume with 4 data and 2 redundancy bricks, "volumetype: none" is a distribute volume
[root@master01 ~]# mkdir -p heketi
[root@master01 ~]# cd heketi/
[root@master01 heketi]# vim gluster-heketi-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-storageclass
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
parameters:
  resturl: "http://192.168.200.182:8080"   # heketi IP
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumetype: "replicate:2"                # 2-replica volume
# Generate the secret resource; the "key" value must be base64 encoded
[root@master01 heketi]# echo -n "admin" | base64
YWRtaW4=
# Note: the name/namespace must match the definition in the storageclass resource;
# the secret must have type "kubernetes.io/glusterfs"
[root@master01 heketi]# cat heketi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default   # default namespace
data:
  # base64 encoded password. E.g.: echo -n "admin" | base64
  key: YWRtaW4=
type: kubernetes.io/glusterfs
# Create the secret resource
[root@master01 heketi]# kubectl create -f heketi-secret.yaml
# Create the storageclass resource;
# note: a storageclass resource is immutable once created; to change it, delete and recreate it
[root@master01 heketi]# kubectl create -f gluster-heketi-storageclass.yaml
# Inspect the storageclass resource
[root@master01 heketi]# kubectl describe storageclass gluster-heketi-storageclass
Name: gluster-heketi-storageclass
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/glusterfs
Parameters: restauthenabled=true,resturl=http://192.168.200.182:8080,restuser=admin,secretName=heketi-secret,secretNamespace=default,volumetype=replicate:2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events:                <none>
3. Define the PVC
1) Define the PVC
# note: "storageClassName" must match the storage class defined above
[root@master01 heketi]# vim gluster-heketi-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gluster-heketi-pvc
spec:
  storageClassName: gluster-heketi-storageclass
  # ReadWriteOnce (RWO): read-write, mountable by a single node;
  # ReadOnlyMany (ROX): read-only, mountable by many nodes;
  # ReadWriteMany (RWX): read-write, mountable by many nodes
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # note the unit format; "GB" is not accepted
      storage: 1Gi
# Create the pvc resource
[root@master01 heketi]# kubectl create -f gluster-heketi-pvc.yaml
2) Inspect the K8S resources
# The PVC status is "Bound";
# "Capacity" matches the requested 1Gi
[root@master01 heketi]# kubectl describe pvc gluster-heketi-pvc
Name: gluster-heketi-pvc
Namespace: default
StorageClass: gluster-heketi-storageclass
Status: Bound
Volume: pvc-00973ed2-103c-11ea-ab3c-000c298bdf45
Labels: <none>
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes: RWO
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  1m    persistentvolume-controller  Successfully provisioned volume pvc-00973ed2-103c-11ea-ab3c-000c298bdf45 using kubernetes.io/glusterfs
# Inspect the PV: besides capacity, the storageclass reference, status and reclaim
# policy, it also shows the GlusterFS Endpoints and path
[root@master01 heketi]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-00973ed2-103c-11ea-ab3c-000c298bdf45 1Gi RWO Delete Bound default/gluster-heketi-pvc gluster-heketi-storageclass 5m
[root@master01 heketi]# kubectl describe pv pvc-00973ed2-103c-11ea-ab3c-000c298bdf45
Name: pvc-00973ed2-103c-11ea-ab3c-000c298bdf45
Labels: <none>
Annotations:     Description=Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id=de24f0db9fb4f803cffe4096a6e0dcb2
                 gluster.org/type=file
                 kubernetes.io/createdby=heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid=2000
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/glusterfs
                 volume.beta.kubernetes.io/mount-options=auto_unmount
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gluster-heketi-storageclass
Status: Bound
Claim: default/gluster-heketi-pvc
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  glusterfs-dynamic-gluster-heketi-pvc
    Path:           vol_de24f0db9fb4f803cffe4096a6e0dcb2
    ReadOnly:       false
Events:             <none>
# Inspect the endpoints resource; its name can be read from the PV info and follows
# the fixed format glusterfs-dynamic-PVC_NAME;
# the endpoints resource holds the concrete addresses used when mounting the storage
# EndpointsName: glusterfs-dynamic-gluster-heketi-pvc
[root@master01 heketi]# kubectl describe endpoints glusterfs-dynamic-gluster-heketi-pvc
Name: glusterfs-dynamic-gluster-heketi-pvc
Namespace: default
Labels: gluster.kubernetes.io/provisioned-for-pvc=gluster-heketi-pvc
Annotations: <none>
Subsets:
  Addresses:          192.168.200.182,192.168.200.183,192.168.200.184   # addresses used to mount the gluster volume
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  1     TCP
Events:  <none>
3) Inspect heketi
```shell
# The volume and its bricks have been created;
# the primary mount endpoint is on glusterfs03, with the other two nodes as backups;
# with only 2 replicas, no brick was created on glusterfs01
[root@master01 heketi]# heketi-cli --user admin --secret admin topology info

Cluster Id: 6814c5edd73936a4447f0516a2886a59

    File:  true
    Block: true

    Volumes:

        Name: vol_de24f0db9fb4f803cffe4096a6e0dcb2
        Size: 1
        Id: de24f0db9fb4f803cffe4096a6e0dcb2
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Mount: 192.168.200.184:vol_de24f0db9fb4f803cffe4096a6e0dcb2
        Mount Options: backup-volfile-servers=192.168.200.182,192.168.200.183
        Durability Type: replicate
        Replica: 2
        Snapshot: Enabled
        Snapshot Factor: 1.00

        Bricks:
            Id: 3d20fde877aef8aaf00d03775bc8b617
            Path: /var/lib/heketi/mounts/vg_b3bdfa6f860359cdc7bce973565f9af8/brick_3d20fde877aef8aaf00d03775bc8b617/brick
            Size (GiB): 1
            Node: 1629d8d3fe1f562a1d5efedfc12159d0
            Device: b3bdfa6f860359cdc7bce973565f9af8

            Id: 49dfa4fe283614a5caf49529127e46a1
            Path: /var/lib/heketi/mounts/vg_c834eb67bea5bd73142bde1d71d3167c/brick_49dfa4fe283614a5caf49529127e46a1/brick
            Size (GiB): 1
            Node: bc3610fa65c82c489a993974dc6823c0
            Device: c834eb67bea5bd73142bde1d71d3167c

    Nodes:

        Node Id: 1629d8d3fe1f562a1d5efedfc12159d0
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 3
        Management Hostnames: 192.168.200.184
        Storage Hostnames: 192.168.200.184
        Devices:
            Id:b3bdfa6f860359cdc7bce973565f9af8   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):1   Free (GiB):18
                Bricks:
                    Id:3d20fde877aef8aaf00d03775bc8b617   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_b3bdfa6f860359cdc7bce973565f9af8/brick_3d20fde877aef8aaf00d03775bc8b617/brick

        Node Id: 238052ce88248e833f0a9b10f8483481
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 1
        Management Hostnames: 192.168.200.182
        Storage Hostnames: 192.168.200.182
        Devices:
            Id:af8ba5c90b782e7762071b800f669e7c   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:

        Node Id: bc3610fa65c82c489a993974dc6823c0
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 2
        Management Hostnames: 192.168.200.183
        Storage Hostnames: 192.168.200.183
        Devices:
            Id:c834eb67bea5bd73142bde1d71d3167c   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):1   Free (GiB):18
                Bricks:
                    Id:49dfa4fe283614a5caf49529127e46a1   Size (GiB):1   Path: /var/lib/heketi/mounts/vg_c834eb67bea5bd73142bde1d71d3167c/brick_49dfa4fe283614a5caf49529127e46a1/brick
```
4) Inspect the GlusterFS nodes
# Taking glusterfs02 (node01) as an example
[root@node01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2                            8:2    0   19G  0 part
  ├─centos-root                 253:0    0   17G  0 lvm  /
  └─centos-swap                 253:1    0    2G  0 lvm
sdb 8:16 0 20G 0 disk
├─vg_c834eb67bea5bd73142bde1d71d3167c-tp_49dfa4fe283614a5caf49529127e46a1_tmeta 253:2 0 8M 0 lvm
│ └─vg_c834eb67bea5bd73142bde1d71d3167c-tp_49dfa4fe283614a5caf49529127e46a1-tpool 253:4 0 1G 0 lvm
│ ├─vg_c834eb67bea5bd73142bde1d71d3167c-tp_49dfa4fe283614a5caf49529127e46a1 253:5 0 1G 0 lvm
│ └─vg_c834eb67bea5bd73142bde1d71d3167c-brick_49dfa4fe283614a5caf49529127e46a1 253:6 0 1G 0 lvm /var/lib/heketi/moun
└─vg_c834eb67bea5bd73142bde1d71d3167c-tp_49dfa4fe283614a5caf49529127e46a1_tdata 253:3 0 1G 0 lvm
  └─vg_c834eb67bea5bd73142bde1d71d3167c-tp_49dfa4fe283614a5caf49529127e46a1-tpool 253:4 0 1G 0 lvm
    ├─vg_c834eb67bea5bd73142bde1d71d3167c-tp_49dfa4fe283614a5caf49529127e46a1 253:5 0 1G 0 lvm
    └─vg_c834eb67bea5bd73142bde1d71d3167c-brick_49dfa4fe283614a5caf49529127e46a1 253:6 0 1G 0 lvm /var/lib/heketi/moun
sr0 11:0 1 4.2G 0 rom
[root@node01 ~]# df -hT
Filesystem                Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 17G 2.5G 15G 15% /
devtmpfs devtmpfs 899M 0 899M 0% /dev
tmpfs tmpfs 911M 0 911M 0% /dev/shm
tmpfs tmpfs 911M 9.7M 902M 2% /run
tmpfs tmpfs 911M 0 911M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 142M 873M 14% /boot
tmpfs tmpfs 183M 0 183M 0% /run/user/0
tmpfs tmpfs 911M 12K 911M 1% /var/lib/kubelet/pods/a3b15bc5-0f57-11ea-adf4-000c298bdf45/volumes/kubernetes.io~secret/kube-proxy-token-pzqn2
tmpfs tmpfs 911M 12K 911M 1% /var/lib/kubelet/pods/a3b132ce-0f57-11ea-adf4-000c298bdf45/volumes/kubernetes.io~secret/flannel-token-8df6v
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/8f6b5e13fc8f1577b6d9ea8ba276665318249500081b6ab48ddbb59e96e5f422/merged
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/33c38dc976ca4ce66d798a5b069220ef6dcfb772bb2352352c0cdac6688412f1/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/5e77ee3784556617f13e7a1d0ba55575f2a4d0520554c166f88366b14ef3737a/shm
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/dd109cad0d23cf7e249b5315216488110e11f04e37e379066393197bfe18c213/shm
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/478d4c24fb88df7d6f9f49824cae02ef26394d81406b38c12c26a74b1640b214/merged
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/6989b9afe1f18e38af7964773f12cedd2fd84710cf4de2d7bc66a2078a8034aa/merged
/dev/mapper/vg_c834eb67bea5bd73142bde1d71d3167c-brick_49dfa4fe283614a5caf49529127e46a1 xfs 1014M 33M 982M 4% /var/lib/heketi/mounts/vg_c834eb67bea5bd73142bde1d71d3167c/brick_49dfa4fe283614a5caf49529127e46a1
# Inspect the volume details: a 2-replica replicate volume;
# "vgscan" and "vgdisplay" also show LVM volume group information
[root@master01 ~]# gluster volume list
vol_de24f0db9fb4f803cffe4096a6e0dcb2
[root@master01 ~]# gluster volume info vol_de24f0db9fb4f803cffe4096a6e0dcb2

Volume Name: vol_de24f0db9fb4f803cffe4096a6e0dcb2
Type: Replicate
Volume ID: 9c21099a-bad8-4ad1-bcec-f32813358c4e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.200.184:/var/lib/heketi/mounts/vg_b3bdfa6f860359cdc7bce973565f9af8/brick_3d20fde877aef8aaf00d03775bc8b617/brick
Brick2: 192.168.200.183:/var/lib/heketi/mounts/vg_c834eb67bea5bd73142bde1d71d3167c/brick_49dfa4fe283614a5caf49529127e46a1/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
4. Mount the storage in a Pod
# Reference one volume from a pod; the volume type is "persistentVolumeClaim"
[root@master01 heketi]# vim gluster-heketi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: gluster-heketi-pod
spec:
  containers:
  - name: gluster-heketi-container
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-heketi-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-heketi-volume
    persistentVolumeClaim:
      claimName: gluster-heketi-pvc
# Create the pod
[root@master01 heketi]# kubectl apply -f gluster-heketi-pod.yaml
pod/gluster-heketi-pod created
5. Verify
# Create files in the container's mount directory
[root@master01 heketi]# kubectl exec -it gluster-heketi-pod /bin/sh
/ # cd /pv-data/
/pv-data # echo "welcome to a" >> a.txt
/pv-data # echo "welcome to b" >> b.txt
/pv-data # ls
a.txt b.txt
/pv-data #
# Check the created files in the matching brick directory on the GlusterFS node;
# find the mount directory via "df -Th" or "lsblk"
[root@node01 ~]# df -hT    # the volume and brick were created on node01
Filesystem                Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 17G 2.5G 15G 15% /
devtmpfs devtmpfs 899M 0 899M 0% /dev
tmpfs tmpfs 911M 0 911M 0% /dev/shm
tmpfs tmpfs 911M 9.7M 902M 2% /run
tmpfs tmpfs 911M 0 911M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 142M 873M 14% /boot
tmpfs tmpfs 183M 0 183M 0% /run/user/0
tmpfs tmpfs 911M 12K 911M 1% /var/lib/kubelet/pods/a3b15bc5-0f57-11ea-adf4-000c298bdf45/volumes/kubernetes.io~secret/kube-proxy-token-pzqn2
tmpfs tmpfs 911M 12K 911M 1% /var/lib/kubelet/pods/a3b132ce-0f57-11ea-adf4-000c298bdf45/volumes/kubernetes.io~secret/flannel-token-8df6v
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/8f6b5e13fc8f1577b6d9ea8ba276665318249500081b6ab48ddbb59e96e5f422/merged
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/33c38dc976ca4ce66d798a5b069220ef6dcfb772bb2352352c0cdac6688412f1/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/5e77ee3784556617f13e7a1d0ba55575f2a4d0520554c166f88366b14ef3737a/shm
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/dd109cad0d23cf7e249b5315216488110e11f04e37e379066393197bfe18c213/shm
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/478d4c24fb88df7d6f9f49824cae02ef26394d81406b38c12c26a74b1640b214/merged
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/6989b9afe1f18e38af7964773f12cedd2fd84710cf4de2d7bc66a2078a8034aa/merged
/dev/mapper/vg_c834eb67bea5bd73142bde1d71d3167c-brick_49dfa4fe283614a5caf49529127e46a1 xfs 1014M 33M 982M 4% /var/lib/heketi/mounts/vg_c834eb67bea5bd73142bde1d71d3167c/brick_49dfa4fe283614a5caf49529127e46a1
[root@node01 ~]# cd /var/lib/heketi/mounts/vg_c834eb67bea5bd73142bde1d71d3167c/brick_49dfa4fe283614a5caf49529127e46a1/brick/
[root@node01 brick]# ls
a.txt b.txt
[root@node01 brick]# cat a.txt
welcome to a
[root@node01 brick]# cat b.txt
welcome to b
# Mount test from the host
# "heketi-cli topology info" (output identical to the listing above) gives the mount endpoint:
Mount: 192.168.200.184:vol_de24f0db9fb4f803cffe4096a6e0dcb2
# Mount test
[root@master01 ~]# mkdir -p /data
[root@master01 ~]# mount -t glusterfs 192.168.200.184:vol_de24f0db9fb4f803cffe4096a6e0dcb2 /data
[root@master01 ~]# ls /data/
a.txt b.txt
[root@master01 ~]# cat /data/a.txt
welcome to a
[root@master01 ~]# cat /data/b.txt
welcome to b
# Test with a Deployment
# Create
[root@master01 ~]# cat nginx-gluster.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-gfs-html
          mountPath: "/usr/share/nginx/html"
        - name: nginx-gfs-conf
          mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi-storageclass"
  resources:
    requests:
      storage: 500Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi-storageclass"
  resources:
    requests:
      storage: 10Mi
[root@master01 ~]# kubectl create -f nginx-gluster.yaml
deployment.extensions/nginx-gfs created
persistentvolumeclaim/glusterfs-nginx-html created
persistentvolumeclaim/glusterfs-nginx-conf created
# Inspect
[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
gluster-heketi-pod 1/1 Running 0 40m
nginx-gfs-77c758ccc-ctmfd 1/1 Running 0 1m
nginx-gfs-77c758ccc-t5s4c 1/1 Running 0 1m
[root@master01 ~]# kubectl exec -it nginx-gfs-77c758ccc-ctmfd /bin/sh
# df -hT
Filesystem Type Size Used Avail Use% Mounted on
overlay overlay 17G 2.7G 15G 16% /
tmpfs tmpfs 911M 0 911M 0% /dev
tmpfs tmpfs 911M 0 911M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 17G 2.7G 15G 16% /etc/hosts
shm tmpfs 64M 0 64M 0% /dev/shm
192.168.200.182:vol_a1dcc657ec3ec70af74f00f3bfab1558 fuse.glusterfs 1014M 43M 972M 5% /etc/nginx/conf.d
192.168.200.182:vol_46ead0a06d0842e4bc2c7c6ea0ae0428 fuse.glusterfs 1014M 43M 972M 5% /usr/share/nginx/html
tmpfs tmpfs 911M 12K 911M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs tmpfs 911M 0 911M 0% /sys/firmware
# exit
[root@master01 ~]# kubectl exec -it nginx-gfs-77c758ccc-t5s4c -- df -hT
Filesystem Type Size Used Avail Use% Mounted on
overlay overlay 17G 2.7G 15G 16% /
tmpfs tmpfs 911M 0 911M 0% /dev
tmpfs tmpfs 911M 0 911M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 17G 2.7G 15G 16% /etc/hosts
shm tmpfs 64M 0 64M 0% /dev/shm
192.168.200.182:vol_a1dcc657ec3ec70af74f00f3bfab1558 fuse.glusterfs 1014M 43M 972M 5% /etc/nginx/conf.d
192.168.200.182:vol_46ead0a06d0842e4bc2c7c6ea0ae0428 fuse.glusterfs 1014M 43M 972M 5% /usr/share/nginx/html
tmpfs tmpfs 911M 12K 911M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs tmpfs 911M 0 911M 0% /sys/firmware
[root@master01 ~]# mkdir -p /ceshi
[root@master01 ~]# mount -t glusterfs 192.168.200.182:vol_46ead0a06d0842e4bc2c7c6ea0ae0428 /ceshi/
[root@master01 ~]# ls /ceshi/
[root@master01 ~]# echo -e "`date`\nwelcome to nginx-glusterfs" >> /ceshi/index.html
[root@master01 ~]# cat /ceshi/index.html
2019年 11月 27日 星期三 11:15:55 CST
welcome to nginx-glusterfs
[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
gluster-heketi-pod 1/1 Running 0 50m
nginx-gfs-77c758ccc-ctmfd 1/1 Running 0 10m
nginx-gfs-77c758ccc-t5s4c 1/1 Running 0 10m
[root@master01 ~]# kubectl exec -it nginx-gfs-77c758ccc-ctmfd -- cat /usr/share/nginx/html/index.html
2019年 11月 27日 星期三 11:15:55 CST
welcome to nginx-glusterfs
# Scale out nginx
# Check the deployment
[root@master01 ~]# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-gfs 2 2 2 2 15m
# Change the replica count to three
[root@master01 ~]# kubectl scale deployment nginx-gfs --replicas 3
deployment.extensions/nginx-gfs scaled
[root@master01 ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-gfs 3 3 3 3 14m
[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
gluster-heketi-pod 1/1 Running 0 56m
nginx-gfs-77c758ccc-86c62 1/1 Running 0 2m   # newly created
nginx-gfs-77c758ccc-ctmfd 1/1 Running 0 16m
nginx-gfs-77c758ccc-t5s4c 1/1 Running 0 16m
# Check that the newly created pod works
[root@master01 ~]# kubectl exec -it nginx-gfs-77c758ccc-86c62 -- cat /usr/share/nginx/html/index.html
2019年 11月 27日 星期三 11:15:55 CST
welcome to nginx-glusterfs
# Scaling out GlusterFS
.......
.......
6. Verify the StorageClass ReclaimPolicy
# Delete the Pod application first, then delete the PVC
[root@master01 heketi]# kubectl delete -f gluster-heketi-pod.yaml
pod "gluster-heketi-pod" deleted
[root@master01 heketi]# kubectl delete -f gluster-heketi-pvc.yaml
persistentvolumeclaim "gluster-heketi-pvc" deleted
[root@master01 heketi]# kubectl get pvc
No resources found.
[root@master01 heketi]# kubectl get pv
No resources found.
[root@master01 heketi]# kubectl get endpoints
NAME ENDPOINTS AGE
glusterfs-dynamic-gluster-heketi-pvc 192.168.200.182:1,192.168.200.183:1,192.168.200.184:1 32m
kubernetes 192.168.200.182:6443 1d
# heketi
```shell
[root@master01 ~]# heketi-cli --user admin --secret admin topology info

Cluster Id: 6814c5edd73936a4447f0516a2886a59

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 1629d8d3fe1f562a1d5efedfc12159d0
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 3
        Management Hostnames: 192.168.200.184
        Storage Hostnames: 192.168.200.184
        Devices:
            Id:b3bdfa6f860359cdc7bce973565f9af8   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:

        Node Id: 238052ce88248e833f0a9b10f8483481
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 1
        Management Hostnames: 192.168.200.182
        Storage Hostnames: 192.168.200.182
        Devices:
            Id:af8ba5c90b782e7762071b800f669e7c   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:

        Node Id: bc3610fa65c82c489a993974dc6823c0
        State: online
        Cluster Id: 6814c5edd73936a4447f0516a2886a59
        Zone: 2
        Management Hostnames: 192.168.200.183
        Storage Hostnames: 192.168.200.183
        Devices:
            Id:c834eb67bea5bd73142bde1d71d3167c   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:
```
# GlusterFS node
[root@node01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm
sdb 8:16 0 20G 0 disk
sr0 11:0 1 4.2G 0 rom
[root@node01 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 17G 2.5G 15G 15% /
devtmpfs devtmpfs 899M 0 899M 0% /dev
tmpfs tmpfs 911M 0 911M 0% /dev/shm
tmpfs tmpfs 911M 9.7M 902M 2% /run
tmpfs tmpfs 911M 0 911M 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 142M 873M 14% /boot
tmpfs tmpfs 183M 0 183M 0% /run/user/0
tmpfs tmpfs 911M 12K 911M 1% /var/lib/kubelet/pods/a3b15bc5-0f57-11ea-adf4-000c298bdf45/volumes/kubernetes.io~secret/kube-proxy-token-pzqn2
tmpfs tmpfs 911M 12K 911M 1% /var/lib/kubelet/pods/a3b132ce-0f57-11ea-adf4-000c298bdf45/volumes/kubernetes.io~secret/flannel-token-8df6v
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/8f6b5e13fc8f1577b6d9ea8ba276665318249500081b6ab48ddbb59e96e5f422/merged
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/33c38dc976ca4ce66d798a5b069220ef6dcfb772bb2352352c0cdac6688412f1/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/5e77ee3784556617f13e7a1d0ba55575f2a4d0520554c166f88366b14ef3737a/shm
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/dd109cad0d23cf7e249b5315216488110e11f04e37e379066393197bfe18c213/shm
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/478d4c24fb88df7d6f9f49824cae02ef26394d81406b38c12c26a74b1640b214/merged
overlay overlay 17G 2.5G 15G 15% /var/lib/docker/overlay/6989b9afe1f18e38af7964773f12cedd2fd84710cf4de2d7bc66a2078a8034aa/merged
[root@node01 ~]# gluster volume list
No volumes present in cluster
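With the topology registered but no volumes created yet, the usual next step is to let Kubernetes provision GlusterFS volumes on demand through heketi. A minimal StorageClass sketch (the `resturl` host/port is an assumption for this test environment; the user and key match the `--user admin --secret admin` used with heketi-cli above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.200.182:8080"   # heketi endpoint (assumed port)
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "admin"                     # matches the heketi admin secret above
```

Once this StorageClass exists, a PVC referencing `glusterfs-heketi` will make heketi carve bricks out of the registered `/dev/sdb` devices and `gluster volume list` will no longer be empty.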