K8s Storage: PV, PVC, and StorageClass
A PV (PersistentVolume) is a cluster-wide resource, visible from every namespace. After an administrator creates a PV, a user can create a PVC (PersistentVolumeClaim) that binds to it and then mount it in a pod. A PV can be bound to only one PVC.
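That one-to-one relationship can also be fixed from the PV side: the core/v1 API exposes a `spec.claimRef` field that reserves a PV for one specific claim. A minimal sketch, with illustrative names that are not part of the examples below:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-reserved            # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /nfs
    server: 192.168.71.134
  claimRef:                    # reserve this PV for exactly one PVC
    namespace: default
    name: pvc-reserved         # only this claim may bind the PV
```

Any other PVC that would otherwise match on size and access mode is skipped while the reservation is in place.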
# Create an NFS-based PV
[root@k8s-master-01 volumeTest]# vim pvnfs.yaml
[root@k8s-master-01 volumeTest]# ls
pod1.yaml  podEmptyDir.yaml  podHostPath.yaml  podNfs.yaml  pvnfs.yaml
[root@k8s-master-01 volumeTest]# more pvnfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvnfs001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  # Reclaim policy. With Recycle, after the PVC is deleted the PV's data is
  # scrubbed (a busybox container wipes it) and the PV returns to Available;
  # with Retain, the data is kept and the PV stays in the Released state.
  persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs
    server: 192.168.71.134
[root@k8s-master-01 volumeTest]# kubectl apply -f pvnfs.yaml
persistentvolume/pvnfs001 created
# Check the PV status.
[root@k8s-master-01 volumeTest]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pvnfs001   5Gi        RWO            Recycle          Available                                   5s

# Create the PVC. A PVC does not reference a PV directly; binding is decided by
# matching accessModes, a requested size <= what the PV provides, and a
# matching storageClassName.
[root@k8s-master-01 volumeTest]# more pvcnfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvcnfs001
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  #storageClassName: slow
  #selector:
  #  matchLabels:
  #    release: "stable"
  #  matchExpressions:
  #    - {key: environment, operator: In, values: [dev]}
[root@k8s-master-01 volumeTest]# kubectl apply -f pvcnfs.yaml
persistentvolumeclaim/pvcnfs001 created
[root@k8s-master-01 volumeTest]# kubectl get pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvcnfs001   Bound    pvnfs001   5Gi        RWO                           4s
# Create a pod that uses the PVC pvcnfs001 as its storage
[root@k8s-master-01 volumeTest]# kubectl apply -f podPvc.yaml
pod/www04 created
[root@k8s-master-01 volumeTest]# cat podPvc.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: www04
  name: www04
spec:
  volumes:
  - name: pvcnfs
    persistentVolumeClaim:
      claimName: pvcnfs001
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: www04
    volumeMounts:
    - name: pvcnfs
      mountPath: /data
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# Enter the container and check the mounted storage
[root@k8s-master-01 volumeTest]# kubectl exec -it www04 -- bash
root@www04:/# df -h
Filesystem               Size  Used  Avail  Use%  Mounted on
overlay                  17G   8.0G  9.0G   48%   /
tmpfs                    64M   0     64M    0%    /dev
tmpfs                    3.9G  0     3.9G   0%    /sys/fs/cgroup
192.168.71.134:/nfs      17G   8.0G  9.0G   48%   /data
/dev/mapper/centos-root  17G   8.0G  9.0G   48%   /etc/hosts
shm                      64M   0     64M    0%    /dev/shm
tmpfs                    7.6G  12K   7.6G   1%    /run/secrets/kubernetes.io/serviceaccount
tmpfs                    3.9G  0     3.9G   0%    /proc/acpi
tmpfs                    3.9G  0     3.9G   0%    /proc/scsi
tmpfs                    3.9G  0     3.9G   0%    /sys/firmware
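The commented-out `selector` block in pvcnfs.yaml above can narrow which PVs a claim is allowed to bind. A hedged sketch, assuming the target PV was created with a label `release: "stable"` (that label is not part of the original manifests):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvcnfs-selected      # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      release: "stable"      # bind only PVs carrying this label
    matchExpressions:
      - {key: environment, operator: In, values: [dev]}
```

If no PV satisfies both the selector and the usual size/access-mode matching, the PVC simply stays Pending.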
StorageClass
A StorageClass requires a corresponding provisioner (the component that allocates storage) to exist first; with one in place, there is no need to create PVs in advance.
# Modify the api-server configuration so dynamic StorageClass provisioning
# works: add the parameter below and restart the kubelet service
#[root@k8s-master-01 volumeTest]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
#[root@k8s-master-01 volumeTest]# systemctl restart kubelet
#- --feature-gates=RemoveSelfLink=false

# Prepare the environment: copy the image to every node and load it
[root@k8s-master-01 volumeTest]# scp nfs-client-provisioner.tar root@192.168.71.134:~
[root@k8s-node-01 ~]# ls
anaconda-ks.cfg  app01  nfs-client-provisioner.tar
[root@k8s-node-01 ~]# docker load -i nfs-client-provisioner.tar

# On the master node, edit the nfs provisioner configuration files
[root@k8s-master-01 deploy]# pwd
/volumeTest/external-storage-master/nfs-client/deploy
[root@k8s-master-01 deploy]# ls
class.yaml  deployment-arm.yaml  deployment.yaml  objects  rbac.yaml  test-claim.yaml  test-pod.yaml
# In deployment.yaml, change the NFS settings to your own NFS server and set
# the image pull policy to IfNotPresent
# Apply rbac.yaml and deployment.yaml
# Check the pods: the provisioner is now running as a pod
[root@k8s-master-01 deploy]# kubectl get pods -n default
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7bcffc97db-c2thb   1/1     Running   0          86s
# Check the storageclass resources (short name: sc)
[root@k8s-master-01 deploy]# kubectl get storageclass
No resources found
# Write the StorageClass manifest and apply it
[root@k8s-master-01 deploy]# vim nfsSc.yaml
[root@k8s-master-01 deploy]# cat nfsSc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfssc
provisioner: fuseim.pri/ifs    # must match the name declared in the deployment
parameters:
  archiveOnDelete: "false"
[root@k8s-master-01 volumeTest]# kubectl apply -f nfsSc.yaml
storageclass.storage.k8s.io/nfssc created
[root@k8s-master-01 volumeTest]# kubectl get sc
NAME    PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfssc   fuseim.pri/ifs   Delete          Immediate           false                  29s
# Create a PVC; storageClassName specifies the name of the StorageClass just created
[root@k8s-master-01 volumeTest]# cat pvcnfssc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvcnfs001
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfssc
[root@k8s-master-01 volumeTest]# kubectl apply -f pvcnfssc.yaml
persistentvolumeclaim/pvcnfs001 created
# Check the created PVC
[root@k8s-master-01 volumeTest]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvcnfs001   Bound    pvc-849b63f8-2724-47ca-90cf-7f11c09176cb   1Gi        RWO            nfssc          5s
# Check the automatically created PV
[root@k8s-master-01 volumeTest]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pvc-849b63f8-2724-47ca-90cf-7f11c09176cb   1Gi        RWO            Delete           Bound    app01/pvcnfs001   nfssc                   86s
[root@k8s-master-01 volumeTest]# kubectl describe pv pvc-849b63f8-2724-47ca-90cf-7f11c09176cb
Name:            pvc-849b63f8-2724-47ca-90cf-7f11c09176cb
Labels:
Annotations:     pv.kubernetes.io/provisioned-by: fuseim.pri/ifs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfssc
Status:          Bound
Claim:           app01/pvcnfs001
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.71.134
    Path:      /nfs/app01-pvcnfs001-pvc-849b63f8-2724-47ca-90cf-7f11c09176cb
    ReadOnly:  false
Events:
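The nfssc class above sets `archiveOnDelete: "false"`, so with the Delete reclaim policy the backing directory disappears along with the PV. If the data should survive PVC deletion, this nfs-client provisioner also accepts, as far as I understand its parameters, `archiveOnDelete: "true"`, which renames the directory with an `archived-` prefix instead of deleting it. A sketch with an illustrative class name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfssc-archive        # illustrative name
provisioner: fuseim.pri/ifs  # must match the name declared in the deployment
parameters:
  archiveOnDelete: "true"    # keep data as archived-<dir> when the PVC is deleted
```

The PV is still removed, but the directory on the NFS server remains recoverable by hand.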
With dynamic provisioning set up this way, PVs no longer need to be created in advance.