VIII. Kubernetes Persistent Storage

I. Storage Volumes: Introduction, Classification, and Selection

1. Introduction to storage volumes

A Pod has a lifecycle; when that lifecycle ends, the data inside the Pod (configuration files, business data, and so on) disappears with it.

Solution: separate the data from the Pod and keep it on a dedicated storage volume.

A Pod can be scheduled onto any node in the k8s cluster; if a Pod dies and is rescheduled onto another node, the link between the Pod and its data is broken.

Solution: we therefore need a storage system that is decoupled from the cluster nodes in order to achieve data persistence.

In short: a volume provides the ability to mount external storage into a container.
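
As a minimal sketch (the concrete volume types are covered in the sections that follow), a volume is declared once under spec.volumes and mounted into containers via volumeMounts; the Pod below is made up purely for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical Pod name, for illustration only
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data               # must match a volume declared below
      mountPath: /data         # where the volume appears inside the container
  volumes:
  - name: data
    emptyDir: {}               # any supported volume type can go here (emptyDir, hostPath, nfs, ...)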

2. Classification of storage volumes

  • Local storage volumes
    • emptyDir: data is removed when the Pod is deleted; used for temporary data
    • hostPath: maps a host (node) directory into the Pod (local storage volume)
  • Network storage volumes
    • NAS: nfs, etc.
    • SAN: iscsi, FC, etc.
    • Distributed storage: glusterfs, cephfs, rbd, cinder, etc.
    • Cloud storage: aws, azurefile, etc.

3. Choosing a storage volume

From an application perspective, storage falls into three main categories:

  • File storage, e.g. nfs, glusterfs, cephfs
    • Pros: data sharing (multiple Pods can mount it and read/write concurrently)
    • Cons: relatively poor performance
  • Block storage, e.g. iscsi, rbd
    • Pros: better performance than file storage
    • Cons: generally cannot be shared across Pods (with some exceptions)
  • Object storage, e.g. Ceph object storage
    • Pros: good performance and data sharing
    • Cons: accessed in a special way, with more limited support

With the wide variety of storage volumes Kubernetes supports, choosing one can be difficult. When selecting storage, focus on the core requirements:

  • Persistence: does the data need to outlive the Pod?
  • Reliability: e.g. does the storage cluster have a single point of failure, are there data replicas?
  • Performance
  • Scalability: e.g. can capacity be expanded easily to keep up with data growth?
  • Operational complexity: running storage is demanding, so prefer a stable open-source solution or a commercial product
  • Cost

In short, choosing storage involves many factors: get familiar with the various storage products, understand their strengths and weaknesses, and pick the one that fits your own requirements.

II. Local storage volumes: emptyDir

1. Use case and characteristics

  • Use case: sharing data between containers inside the same Pod
  • Characteristic: when the Pod is deleted, the volume is deleted with it

2. Example

2.1 Create the YAML file

cat <<EOF> volume-emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
  namespace: test
spec:
  containers:
  - name: write
    image: centos:centos7
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo haha > /data/1.txt ; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data

  - name: read
    image: centos:centos7
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","cat /data/1.txt ; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
EOF

2.2 Check for syntax errors and apply

[root@k8s-master01 yaml]# kubectl apply -f volume-emptydir.yaml --dry-run=client
pod/volume-emptydir configured (dry run)

[root@k8s-master01 yaml]# kubectl apply -f volume-emptydir.yaml
pod/volume-emptydir created

2.3 Verify

[root@k8s-master01 yaml]# kubectl get pod -n test -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
volume-emptydir   2/2     Running   0          71s   10.244.203.227   k8s-worker04   <none>           <none>

## Enter the writer container and check
[root@k8s-master01 yaml]# kubectl exec -it -n test volume-emptydir -c write -- /bin/bash
[root@volume-emptydir /]# ls
anaconda-post.log  bin  data  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@volume-emptydir /]# ls data/
1.txt
[root@volume-emptydir /]# cat data/1.txt 
haha
[root@volume-emptydir /]# exit
exit

## Enter the reader container and check
[root@k8s-master01 yaml]# kubectl exec -it -n test volume-emptydir -c reade -- /bin/bash
Error from server (BadRequest): container reade is not valid for pod volume-emptydir
[root@k8s-master01 yaml]# kubectl exec -it -n test volume-emptydir -c read -- /bin/bash
[root@volume-emptydir /]# cat /data/1.txt 
haha

## Check the logs
[root@k8s-master01 yaml]# kubectl logs -n test volume-emptydir -c write
[root@k8s-master01 yaml]# kubectl logs -n test volume-emptydir -c read
haha

2.4 Delete the Pod; the data is deleted along with it

[root@k8s-master01 yaml]# kubectl delete -f volume-emptydir.yaml
pod "volume-emptydir" deleted

III. Local storage volumes: hostPath

1. Introduction

hostPath mounts a file or directory from the node into the Pod. That directory then acts as persistent storage: even if the Pod is deleted and recreated, it can be remounted and the files in it are not lost.
Use case

  • Mapping a node directory into the Pod (a container needs to access data on the node itself; for example, monitoring and log agents can only report a node's state if they can read the host's files)
Drawback
  • If the node fails and the controller starts the container on another node, the data then comes from that other node's host directory (no cross-node data sharing)

2. Example

2.1 Create the YAML file

cat <<EOF> volume-hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
  namespace: test
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","echo haha > /data/1.txt ; sleep 6000"]
    volumeMounts:
    - name: data                    # name of the volume to mount
      mountPath: /data              # mount path inside the container

  volumes:
  - name: data                      # must match the name used in volumeMounts
    hostPath:
      path: /opt                    # path on the cluster node
      type: Directory               # hostPath type: the path must already exist as a directory
EOF

2.2 Check syntax and apply

[root@k8s-master01 yaml]# kubectl apply -f volume-hostpath.yaml --dry-run=client
pod/volume-hostpath created (dry run)

[root@k8s-master01 yaml]# kubectl apply -f volume-hostpath.yaml
pod/volume-hostpath created

2.3 Verify the Pod and its description

[root@k8s-master01 yaml]# kubectl get pod -n test volume-hostpath -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
volume-hostpath   1/1     Running   0          10s   10.244.203.229   k8s-worker04   <none>           <none>

[root@k8s-master01 yaml]# kubectl describe pod -n test volume-hostpath |tail -8
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  26s   default-scheduler  Successfully assigned test/volume-hostpath to k8s-worker04
  Normal  Pulled     26s   kubelet            Container image "busybox" already present on machine
  Normal  Created    26s   kubelet            Created container busybox
  Normal  Started    26s   kubelet            Started container busybox
[root@k8s-master01 yaml]# 

2.4 Check that 1.txt exists in the directory on the corresponding node

## The describe output above shows the Pod was scheduled onto k8s-worker04, so check on that node
[root@k8s-worker04 ~]# cat /opt/1.txt 
haha
[root@k8s-worker04 ~]# 

2.5 After the Pod is deleted, the data on the node remains

[root@k8s-master01 yaml]# kubectl delete -f volume-hostpath.yaml
pod "volume-hostpath" deleted

## Verify that the data still exists
[root@k8s-worker04 ~]# cat /opt/1.txt 
haha
[root@k8s-worker04 ~]#

IV. Network storage volumes: nfs

1. Set up the NFS server

1.1 Install the NFS server

[root@k8s-nfs ~]# mkdir /kuberneters
[root@k8s-nfs ~]# yum install -y nfs-utils rpcbind
[root@k8s-nfs ~]# vim /etc/exports
/kuberneters *(rw,no_root_squash,sync)
[root@k8s-nfs ~]# systemctl restart nfs-server.service 
[root@k8s-nfs ~]# systemctl enable  nfs-server.service

1.2 Install the NFS client packages on all nodes

[root@k8s-master01 ~]# yum install nfs-utils -y
[root@k8s-master02 ~]# yum install nfs-utils -y
[root@k8s-master03 ~]# yum install nfs-utils -y
[root@k8s-worker01 ~]# yum install nfs-utils -y
[root@k8s-worker02 ~]# yum install nfs-utils -y
[root@k8s-worker03 ~]# yum install nfs-utils -y
[root@k8s-worker04 ~]# yum install nfs-utils -y

1.3 Verify NFS availability from all nodes

[root@k8s-master01 ~]# showmount -e 192.168.122.19
Export list for 192.168.122.19:
/kuberneters *

[root@k8s-master02 ~]# showmount -e 192.168.122.19
Export list for 192.168.122.19:
/kuberneters *

[root@k8s-master03 ~]# showmount -e 192.168.122.19
Export list for 192.168.122.19:
/kuberneters *

[root@k8s-worker01 ~]# showmount -e 192.168.122.19
Export list for 192.168.122.19:
/kuberneters *

[root@k8s-worker02 ~]# showmount -e 192.168.122.19
Export list for 192.168.122.19:
/kuberneters *

[root@k8s-worker03 ~]# showmount -e 192.168.122.19
Export list for 192.168.122.19:
/kuberneters *

[root@k8s-worker04~]# showmount -e 192.168.122.19
Export list for 192.168.122.19:
/kuberneters *

2. Configure the NFS example

2.1 Write the YAML file

cat <<EOF> volume-nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name:  volume-nfs
  namespace: test
  labels:
    app:  nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24.0
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: documentroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: documentroot
        nfs:
          server: 192.168.122.19
          path: /kuberneters
EOF

2.2 Check for syntax errors and apply

[root@k8s-master01 yaml]# kubectl apply -f volume-nfs.yaml --dry-run=client
deployment.apps/volume-nfs created (dry run)

[root@k8s-master01 yaml]# kubectl apply -f volume-nfs.yaml
deployment.apps/volume-nfs created

2.3 Verify the Pods

[root@k8s-master01 yaml]# kubectl get pod -n test -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
volume-nfs-6bcbb6fcf-nj5gv   1/1     Running   0          39s   10.244.39.224    k8s-worker03   <none>           <none>
volume-nfs-6bcbb6fcf-tbcpn   1/1     Running   0          39s   10.244.203.230   k8s-worker04   <none>           <none>

2.4 Create an nginx index page in the NFS export directory and verify it from the Pods

# Create the index page on the NFS server
[root@k8s-nfs kuberneters]# echo "haha" > index.html

# Verify from within the cluster
[root@k8s-master01 yaml]# curl 10.244.39.224
haha
[root@k8s-master01 yaml]# curl 10.244.203.230
haha

[root@k8s-master01 yaml]# kubectl exec -it -n test volume-nfs-6bcbb6fcf-nj5gv -- cat /usr/share/nginx/html/index.html
haha
[root@k8s-master01 yaml]# kubectl exec -it -n test volume-nfs-6bcbb6fcf-tbcpn -- cat /usr/share/nginx/html/index.html
haha

(The same result can also be confirmed in Kuboard; screenshot not available.)

V. PV (PersistentVolume) and PVC (PersistentVolumeClaim)

1. Understanding PV and PVC

Kubernetes supports a rich variety of storage volume types, and each type needs its own interface and parameters to be written out, which makes maintenance and management harder.

A PersistentVolume (PV) is a pre-configured piece of storage (it can be backed by any type of storage volume).

  • In other words, a piece of shared network storage is exposed and defined as a PV.

A PersistentVolumeClaim (PVC) is a user Pod's request to use a PV.

  • Users do not need to care about the underlying volume implementation details, only about their usage requirements.

2. Relationship between PV and PVC

  • A PV provides storage resources (the producer)
  • A PVC consumes storage resources (the consumer)
  • A PVC is used to bind to a PV


3. Implementing an NFS-backed PV and PVC

Note: a PV can be bound by only one PVC.

3.1 Write the PV YAML file

cat <<EOF> pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume    # resource kind: PV
metadata:
  name: pv-nfs            # PV name
  namespace: test         # note: PVs are cluster-scoped, so this field is ignored
spec:
  capacity:
    storage: 2Gi          # size
  accessModes:
    - ReadWriteMany       # access mode
  nfs:
    path: /kuberneters     # NFS export directory
    server: 192.168.122.19  # NFS server address
EOF

There are three access modes: ReadWriteOnce (RWO, mounted read-write by a single node), ReadOnlyMany (ROX, mounted read-only by many nodes), and ReadWriteMany (RWX, mounted read-write by many nodes).

3.2 Check syntax and create the PV

[root@k8s-master01 yaml]# kubectl apply -f pv-nfs.yaml --dry-run=client
persistentvolume/pv-nfs created (dry run)

[root@k8s-master01 yaml]# kubectl apply -f pv-nfs.yaml
persistentvolume/pv-nfs created

3.3 Verify the PV

[root@k8s-master01 yaml]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs   2Gi        RWX            Retain           Available                                    101s
# CAPACITY: size; ACCESS MODES: RWX = ReadWriteMany; RECLAIM POLICY: Retain = manual reclaim; STATUS: Available = not yet bound

3.4 Create the PVC

cat <<EOF> pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: test
spec:
  accessModes:
  - ReadWriteMany            # access mode
  resources:                 # resources the claim requires
    requests:                # requested resources
      storage: 2Gi           # requested storage size
EOF

3.5 Check syntax and apply

[root@k8s-master01 yaml]# kubectl apply -f pvc-nfs.yaml --dry-run=client
persistentvolumeclaim/pvc-nfs created (dry run)

[root@k8s-master01 yaml]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/pvc-nfs created
[root@k8s-master01 yaml]# 

3.6 Verify

[root@k8s-master01 yaml]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   2Gi        RWX                           65s
# Bound means the PVC is bound to the PV; no storage class is shown (none was set)

[root@k8s-master01 yaml]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-nfs   2Gi      RWX            Retain           Bound    default/pvc-nfs                           12m
# The PV's STATUS has changed to Bound and CLAIM now shows default/pvc-nfs

4. Create a Pod application to verify

4.1 Write the YAML file

cat <<EOF> deploy-nginx-nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx-nfs
  namespace: test
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs
EOF

4.2 Check syntax and apply

[root@k8s-master01 yaml]# kubectl apply -f deploy-nginx-nfs.yaml --dry-run=client
deployment.apps/deploy-nginx-nfs created (dry run)

[root@k8s-master01 yaml]# kubectl apply -f deploy-nginx-nfs.yaml
deployment.apps/deploy-nginx-nfs created

4.3 Verify the Pod information

[root@k8s-master01 yaml]# kubectl describe pod -n test | tail -8
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m22s  default-scheduler  Successfully assigned test/deploy-nginx-nfs-8758f579b-r92lg to k8s-worker03
  Normal  Pulled     5m22s  kubelet            Container image "nginx:1.24.0" already present on machine
  Normal  Created    5m22s  kubelet            Created container nginx
  Normal  Started    5m22s  kubelet            Started container nginx
  
[root@k8s-master01 yaml]# kubectl get pod -n test -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
deploy-nginx-nfs-8758f579b-4kd7b   1/1     Running   0          5m33s   10.244.203.231   k8s-worker04   <none>           <none>
deploy-nginx-nfs-8758f579b-r92lg   1/1     Running   0          5m33s   10.244.39.225    k8s-worker03   <none>           <none>

(The storage details can also be viewed in Kuboard; screenshot not available.)

4.4 Verify the mount

# Check the index page created on the NFS server
[root@k8s-nfs kuberneters]# cat index.html 
1111

# From the cluster, check the nginx index page served by the Pods
[root@k8s-master01 yaml]# kubectl exec -it -n test deploy-nginx-nfs-8758f579b-4kd7b -- cat /usr/share/nginx/html/index.html
1111
[root@k8s-master01 yaml]# curl 10.244.203.231
1111

VI. Using subPath to mount multiple subdirectories

Goal: mount several directories needed by a container in a Pod onto separate subdirectories under a single NFS share.

1. Create the YAML file

[root@k8s-master01 yaml]# vim test-pv-pvc.yaml
apiVersion: v1
kind: PersistentVolume                    # resource kind: PV
metadata:
  name: pv-nfs                            # PV name
  namespace: test                         # namespace (PVs are cluster-scoped, so this is ignored)
spec:
  capacity:
    storage: 1Gi                          # PV capacity
  accessModes:
    - ReadWriteMany                       # PV access mode
  nfs:                                    # NFS mount details
    path: /kuberneters
    server: 192.168.122.19
---
apiVersion: v1
kind: PersistentVolumeClaim               # resource kind: PVC
metadata:
  name: pvc-nfs                           # PVC name
  namespace: test                         # PVC namespace
spec:
  accessModes:
    - ReadWriteMany                       # PVC access mode
  resources:
    requests:
      storage: 1Gi                        # requested size, matching the PV here
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: test
spec:
  containers:
    - name: c1
      image: busybox
      command: ["/bin/sleep", "100000"]
      volumeMounts:                       # container mounts
        - name: data                      # volume name
          mountPath: /opt/data1           # mount path for subdirectory 1
          subPath: data1                  # name of subdirectory 1 on the volume
        - name: data                      # volume name
          mountPath: /opt/data2           # mount path for subdirectory 2
          subPath: data2                  # name of subdirectory 2 on the volume
  volumes:                                # volumes used by the Pod
    - name: data                          # must match the name used in volumeMounts
      persistentVolumeClaim:              # volume type: PVC
        claimName: pvc-nfs                # PVC name

2. Check for syntax errors and apply

[root@k8s-master01 yaml]# kubectl apply -f test-pv-pvc.yaml --dry-run=client
persistentvolume/pv-nfs created (dry run)
persistentvolumeclaim/pvc-nfs created (dry run)
pod/pod1 created (dry run)

[root@k8s-master01 yaml]# kubectl apply -f test-pv-pvc.yaml
persistentvolume/pv-nfs created
persistentvolumeclaim/pvc-nfs created
pod/pod1 created

3. Verify the Pod information

[root@k8s-master01 yaml]# kubectl describe pod -n test |tail -n 8
  ----     ------            ----  ----               -------
  Normal   Pulling           56s   kubelet            Pulling image "busybox"
  Normal   Pulled            56s   kubelet            Successfully pulled image "busybox" in 442.509366ms
  Normal   Created           56s   kubelet            Created container c1
  Normal   Started           56s   kubelet            Started container c1
[root@k8s-master01 yaml]# kubectl get pod -n test -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          85s   10.244.203.232   k8s-worker04   <none>           <none>
[root@k8s-master01 yaml]# 

4. Verify the mounts on the NFS server

# Check that the two subdirectories were created
[root@k8s-nfs kuberneters]# ls
data1  data2

# Create a file test.txt in the data1 directory
[root@k8s-nfs kuberneters]# touch data1/test.txt
[root@k8s-nfs kuberneters]# ls data1/
test.txt

# From the cluster, check that test.txt appears in the Pod's directory to confirm the mount works
[root@k8s-master01 yaml]# kubectl exec -it -n test pod1 -- ls /opt/data1
test.txt

VII. Dynamic storage provisioning

1. Introduction to dynamic provisioning

What is dynamic provisioning?
Having to create a PV first and then a PVC every time you need storage is tedious, so we can use the dynamic provisioning feature instead.

  • With static provisioning, the user's PVC must exactly match the capacity and access mode of a pre-created PV; dynamic provisioning has no such requirement.
  • Administrators no longer need to pre-create large numbers of PVs as storage resources.

Starting with version 1.4, Kubernetes introduced a new resource object, StorageClass, which defines storage resources as classes with distinct characteristics rather than as concrete PVs. A user issues a PVC directly against the desired class; the claim is either matched to a PV the administrator created in advance, or a PV is dynamically created for the user on demand, eliminating the need to create the PV first.
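
As a rough sketch of what this looks like from the user's side (the PVC name below is made up for illustration; nfs-client is the StorageClass created in section 2.1 further down), a claim simply names the desired class and a matching PV is provisioned automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic-demo          # hypothetical name, for illustration only
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client    # StorageClass created in section 2.1 below
  resources:
    requests:
      storage: 1Gi                # a PV of this size is created on demand; none needs to pre-exist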

2. Dynamic provisioning with NFS

PV support for a storage system is implemented through plugins (provisioners); the plugin types Kubernetes currently supports are listed in the official docs.
Official docs: https://kubernetes.io/docs/concepts/storage/storage-classes/
The built-in plugins do not support dynamic provisioning for NFS, but a third-party plugin can be used instead.
Third-party plugin: https://github.com/kubernetes-retired/external-storage


2.1 Create the StorageClass

Create the StorageClass file

# Create a working directory
[root@k8s-master01 ~]# mkdir sc
[root@k8s-master01 ~]# cd sc

# Open the project's class.yaml, copy its contents, and create a local storageclass-nfs.yaml file
[root@k8s-master01 sc]# vim storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass                    # resource kind
metadata:
  name: nfs-client                    # class name; PVCs must reference this name to use it
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # dynamic provisioning plugin
parameters:
  archiveOnDelete: "false"            # whether to archive data on delete: false = do not archive, true = archive

Check syntax and apply

[root@k8s-master01 sc]# kubectl apply -f storageclass-nfs.yaml --dry-run=client
storageclass.storage.k8s.io/nfs-client created (dry run)

[root@k8s-master01 sc]# kubectl apply -f storageclass-nfs.yaml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master01 sc]#

Check the StorageClass

[root@k8s-master01 sc]# kubectl get storageclasses.storage.k8s.io 
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  46s

# RECLAIMPOLICY: the PV reclaim policy, i.e. whether the PV is deleted or kept after the Pod/PVC is deleted
# VOLUMEBINDINGMODE: with Immediate, the PV and PVC are bound right away, without waiting for the consuming Pod to be scheduled or caring which node it will run on
# ALLOWVOLUMEEXPANSION: whether PVCs of this class can be expanded
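
For reference, here is a hedged sketch of a StorageClass that sets these fields explicitly; this variant is hypothetical and not part of the setup in this post:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-retain                   # hypothetical class name, for illustration only
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain                       # keep the PV (and its data) after the PVC is deleted
volumeBindingMode: WaitForFirstConsumer     # delay binding until a Pod using the PVC is scheduled
allowVolumeExpansion: true                  # allow PVCs of this class to be resized
parameters:
  archiveOnDelete: "true"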

(The StorageClass can also be viewed in Kuboard; screenshot not available.)

2.2 Create the RBAC resources

Because the provisioner creates PVs automatically through the kube-apiserver, it needs authorization. Open the project's rbac.yaml, copy its contents, and create a local file storageclass-nfs-rbac.yaml.

[root@k8s-master01 sc]# vim storageclass-nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner                    # the provisioner Deployment must reference this name in serviceAccountName
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Check for syntax errors and apply

[root@k8s-master01 sc]# kubectl apply -f storageclass-nfs-rbac.yaml --dry-run=client
serviceaccount/nfs-client-provisioner created (dry run)
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created (dry run)
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created (dry run)
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created (dry run)

[root@k8s-master01 sc]# kubectl apply -f storageclass-nfs-rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

2.3 Create the provisioner Deployment

A dedicated Deployment is needed to perform the automatic creation of PVs for PVCs.

[root@k8s-master01 sc]# vim deploy-nfs-client-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lihuahaitang/nfs-subdir-external-provisioner:v4.0.2  # note the image used here
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.122.19                    # NFS server address
            - name: NFS_PATH
              value: /kuberneters                        # NFS export directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.122.19                    # NFS server address
            path: /kuberneters                        # NFS export directory

Check for syntax errors and apply

[root@k8s-master01 sc]# kubectl apply -f deploy-nfs-client-provisioner.yaml --dry-run=client
deployment.apps/nfs-client-provisioner created (dry run)

[root@k8s-master01 sc]# kubectl apply -f deploy-nfs-client-provisioner.yaml
deployment.apps/nfs-client-provisioner created

Verify the Pod information

[root@k8s-master01 sc]# kubectl get pod -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
nfs-client-provisioner-8557857d54-tbc78   1/1     Running   0          2m56s   10.244.203.233   k8s-worker04   <none>           <none>

[root@k8s-master01 sc]# kubectl describe pod nfs-client-provisioner-8557857d54-tbc78 |tail -8
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m8s  default-scheduler  Successfully assigned default/nfs-client-provisioner-8557857d54-tbc78 to k8s-worker04
  Normal  Pulling    3m7s  kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner"
  Normal  Pulled     3m6s  kubelet            Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner" in 1.754547424s
  Normal  Created    3m6s  kubelet            Created container nfs-client-provisioner
  Normal  Started    3m6s  kubelet            Started container nfs-client-provisioner

(The provisioner Pod can also be viewed in Kuboard; screenshot not available.)

3. Create a Pod application to verify dynamic provisioning

Write the YAML file

[root@k8s-master01 sc]# vim nginx-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test                # same namespace as the StatefulSet that references this Service
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: test
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets: []
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.24.0
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"                # 存储类名称
      resources:
        requests:
          storage: 1Gi

Check for syntax errors and apply

[root@k8s-master01 sc]# kubectl apply -f nginx-test.yaml --dry-run=client
service/nginx created (dry run)
statefulset.apps/web created (dry run)

[root@k8s-master01 sc]# kubectl apply -f nginx-test.yaml
service/nginx created
statefulset.apps/web created

Verify the Pod information

[root@k8s-master01 sc]# kubectl describe pod -n test | tail -8
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Normal   Scheduled         103s  default-scheduler  Successfully assigned test/web-1 to k8s-worker01
  Normal   Pulled            103s  kubelet            Container image "nginx:1.24.0" already present on machine
  Normal   Created           103s  kubelet            Created container nginx
  Normal   Started           103s  kubelet            Started container nginx
  
[root@k8s-master01 sc]# kubectl get pod -n test 
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          112s
web-1   1/1     Running   0          110s

[root@k8s-master01 sc]# kubectl get pod -n test -o wide
NAME    READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          116s   10.244.203.238   k8s-worker04   <none>           <none>
web-1   1/1     Running   0          114s   10.244.79.83     k8s-worker01   <none>           <none>

# Verify the PVs in the cluster
[root@k8s-master01 sc]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-1f491639-a82a-4021-99a2-6116925cbec6   1Gi        RWO            Delete           Bound    test/www-web-0   nfs-client              7m7s
pvc-557f9f14-93f1-4761-af1c-45ef13d32550   1Gi        RWO            Delete           Bound    test/www-web-1   nfs-client              7m5s

# Verify the PVCs in the cluster
[root@k8s-master01 sc]# kubectl get pvc --all-namespaces
NAMESPACE   NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test        www-web-0   Bound    pvc-1f491639-a82a-4021-99a2-6116925cbec6   1Gi        RWO            nfs-client     8m3s
test        www-web-1   Bound    pvc-557f9f14-93f1-4761-af1c-45ef13d32550   1Gi        RWO            nfs-client     8m1s

Check on the NFS server

[root@k8s-nfs kuberneters]# ls
test-www-web-0-pvc-1f491639-a82a-4021-99a2-6116925cbec6  test-www-web-1-pvc-557f9f14-93f1-4761-af1c-45ef13d32550

(The PVs and PVCs can also be viewed in Kuboard; screenshot not available.)

Add an nginx index page on the NFS server and test from the cluster

# Add the test file
[root@k8s-nfs ~]# cd /kuberneters/
[root@k8s-nfs kuberneters]# ls
test-www-web-0-pvc-1f491639-a82a-4021-99a2-6116925cbec6  test-www-web-1-pvc-557f9f14-93f1-4761-af1c-45ef13d32550
[root@k8s-nfs kuberneters]# echo hahaha > index.html
[root@k8s-nfs kuberneters]# cp index.html test-www-web-0-pvc-1f491639-a82a-4021-99a2-6116925cbec6/.
[root@k8s-nfs kuberneters]# cp index.html test-www-web-1-pvc-557f9f14-93f1-4761-af1c-45ef13d32550/.

# Get the Pod IPs in the cluster and test
[root@k8s-master01 sc]# kubectl get pod -n test -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
web-0   1/1     Running   0          5m38s   10.244.203.238   k8s-worker04   <none>           <none>
web-1   1/1     Running   0          5m36s   10.244.79.83     k8s-worker01   <none>           <none>

[root@k8s-master01 sc]# curl 10.244.203.238
hahaha
[root@k8s-master01 sc]# curl 10.244.79.83
hahaha

[root@k8s-master01 sc]# kubectl exec -it -n test web-0 -- cat /usr/share/nginx/html/index.html
hahaha
[root@k8s-master01 sc]# kubectl exec -it -n test web-1 -- cat /usr/share/nginx/html/index.html
hahaha

To clean up dynamically provisioned storage, delete the PVC first and then the PV. Note that force-deleting a PV can lead to data loss or unrecoverable data, so proceed with caution.

[root@k8s-master01 yaml]# kubectl delete pvc  <pvc-name>
[root@k8s-master01 yaml]# kubectl delete pv  <pv-name>
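
Before deleting anything, it can be worth confirming what the PV's reclaim policy will do to the underlying data (the nfs-client class above creates PVs with the Delete policy, and with archiveOnDelete set to "false" the backing directory is typically removed rather than archived). A quick way to check the policy, with <pv-name> left as a placeholder:

kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}{"\n"}'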