
Kubernetes Storage (Real Data)

cicaba
2025-08-14

Storage Categories

Metadata

◆ ConfigMap: stores configuration data (plaintext)
◆ Secret: stores sensitive data (base64-encoded)
◆ Downward API: lets containers obtain information about themselves from the Kubernetes API server at runtime

Real Data

◆ Volume: stores temporary or persistent data
◆ PersistentVolume: claim-based persistent storage

Volume

A Volume in Kubernetes is the mechanism for providing data storage to containers. It mainly solves two problems:

  • Ephemeral container storage: a container's filesystem is wiped when the container exits or the Pod is deleted; a Volume can preserve that data.
  • Data sharing among containers: multiple containers in the same Pod can share files through a Volume.

Common Volume Types
1. emptyDir
Node-local temporary storage; the data is deleted when the Pod is deleted
Suitable for temporary caches and intermediate files
Can be backed by disk or by memory (medium: Memory)

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c"]
    args:
      - |
        echo "写入数据到 /cache/test.txt"
        echo "Hello from emptyDir" > /cache/test.txt
        sleep 3600
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  - name: sidecar
    image: busybox
    command: ["/bin/sh", "-c"]
    args:
      - |
        echo "读取 /cache/test.txt"
        cat /cache/test.txt || echo "文件不存在"
        sleep 3600
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}

  # memory-backed variant (tmpfs):
  #- name: cache-volume
  #  emptyDir:
  #    medium: Memory
  #    sizeLimit: "64Mi"

Under the kubelet working directory (controlled by the --root-dir flag, default /var/lib/kubelet), kubelet creates a directory for every Pod that uses emptyDir: {}, of the form /var/lib/kubelet/pods/{pod-uid}/volumes/kubernetes.io~empty-dir/. Everything written to an emptyDir ultimately lands on the node under this path.
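To see this on a live cluster, a minimal check assuming the emptydir-demo Pod above and shell access to the node it is scheduled on:

$ POD_UID=$(kubectl get pod emptydir-demo -o jsonpath='{.metadata.uid}')
# run on the Pod's node; the last path segment is the volume name from the manifest
$ ls /var/lib/kubelet/pods/$POD_UID/volumes/kubernetes.io~empty-dir/cache-volume/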

2. hostPath
Mounts a directory or file from the host into the container
Suitable for scenarios that need access to host logs, config files, etc.
Poor portability; not suitable for multi-node deployments
Commonly used type values:

  • "" (empty): no check is performed before mounting the volume
  • DirectoryOrCreate: the directory is created (mode 0755) if it does not already exist
  • Directory: a directory must already exist at the path
  • FileOrCreate: the file is created (mode 0644) if it does not already exist
  • File: a file must already exist at the path
  • Socket: a UNIX socket must exist at the path
  • CharDevice: a character device must exist at the path
  • BlockDevice: a block device must exist at the path

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
  - image: nginx
    name: myapp
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
3. nfs / glusterfs / cephfs
Network storage volumes that allow data to be shared across nodes
4. projected
Merges multiple volume sources (such as ConfigMap, Secret, and Downward API) into a single mount point (see the sketch below)
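A minimal sketch of a projected volume; the ConfigMap app-config and Secret app-secret referenced here are hypothetical and assumed to exist:

apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "ls /etc/merged && sleep 3600"]
    volumeMounts:
    - name: merged
      mountPath: /etc/merged
  volumes:
  - name: merged
    projected:
      sources:
      # keys from a ConfigMap (assumed to exist)
      - configMap:
          name: app-config
      # keys from a Secret (assumed to exist)
      - secret:
          name: app-secret
      # Pod metadata exposed via the Downward API
      - downwardAPI:
          items:
          - path: pod-name
            fieldRef:
              fieldPath: metadata.name

All three sources end up as files under the single mount point /etc/merged.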

PV/PVC

A PVC requests persistent storage declaratively; Kubernetes binds the claim to a matching PV.
Capacity: the PV must be at least as large as the PVC requests; it may be larger, though an exact match is best
Access modes: must match exactly

  • Single-node read-write - ReadWriteOnce - RWO
  • Multi-node read-only - ReadOnlyMany - ROX
  • Multi-node read-write - ReadWriteMany - RWX

Storage class: the PV's class must be identical to the PVC's class; there is no fallback or downgrade between classes
The backend can be NFS, Ceph, cloud disks, and so on
The data is independent of the Pod lifecycle

Reclaim policies:

  • Retain: manual reclamation
  • Recycle: basic scrub (rm -rf /thevolume/*)
  • Delete: the associated storage asset (e.g., an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted

Currently, only NFS and HostPath support the Recycle policy, while AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy.

When PVC protection is enabled, a PVC that is still in use by a Pod is not deleted immediately; its deletion is deferred until no Pod uses it anymore.
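This protection is implemented as a finalizer on the PVC object; a quick way to inspect it (using the test-claim PVC from the test section below):

$ kubectl get pvc test-claim -o jsonpath='{.metadata.finalizers}'

The kubernetes.io/pvc-protection finalizer keeps the object alive until the last Pod using it is gone.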
Deploying a PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/nfs
    server: 10.66.66.10
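
Assuming the manifest is saved as nfspv1.yaml, apply it and confirm the PV shows up as Available until a claim binds it:

$ kubectl apply -f nfspv1.yaml
$ kubectl get pv nfspv1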

Creating a Service and Using PVCs

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v1.0
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/local/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 1Gi  
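
volumeClaimTemplates creates one PVC per replica, here named www-web-0, www-web-1, and www-web-2, so three PVs matching the request (1Gi, ReadWriteOnce, class nfs) are needed before all replicas can start. The bindings can be verified with:

$ kubectl get pvc
$ kubectl get pv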

StorageClass

A StorageClass is a resource object that defines a dynamic provisioning policy for PersistentVolumes. It lets administrators define different classes of storage and specify how PVs are created on demand for applications, which makes storage management in a Kubernetes cluster more flexible and automated.
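For example, the classes available in a cluster can be listed, and one of them promoted to the cluster default via the standard is-default-class annotation (using the nfs-client class defined later in this post):

$ kubectl get storageclass
$ kubectl annotate storageclass nfs-client storageclass.kubernetes.io/is-default-class="true"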

nfs-client-provisioner

  • nfs-client-provisioner is a Kubernetes provisioner that dynamically provides persistent volumes backed by NFS (Network File System) shares. In Kubernetes, persistent volumes are storage resources that exist independently of Pods and persist data across Pod restarts or rescheduling.
  • nfs-client-provisioner automates the creation of persistent volumes on demand by interacting with the NFS server. This is especially useful when storage must be allocated dynamically for applications in the cluster without manually managing NFS exports and PV creation.

Deploying nfs-client-provisioner

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-storageclass
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: k8s.dockerproxy.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # value: <YOUR NFS SERVER HOSTNAME>
              value: 192.168.66.11
            - name: NFS_PATH
              # value: /var/nfs
              value: /nfsdata/share
      volumes:
        - name: nfs-client-root
          nfs:
            # server: <YOUR NFS SERVER HOSTNAME>
            server: 192.168.66.11
            # share nfs path
            path: /nfsdata/share
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-storageclass
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-storageclass
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  # per-PVC subdirectory layout on the NFS share
  pathPattern: ${.PVC.namespace}/${.PVC.name}
  # remove the backing directory when the PVC is deleted
  onDelete: delete

Create the namespace (it must exist before the manifests above can be applied):

$ kubectl create ns nfs-storageclass

Test Pod

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs-client
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: wangyanglinux/myapp:v1.0
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/usr/local/nginx/html"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
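
Applying both manifests should leave the claim Bound to a dynamically provisioned PV. Since the PVC above sets no namespace and therefore lands in default, the pathPattern configured earlier should yield a matching subdirectory on the NFS share:

$ kubectl get pvc test-claim
# on the NFS server:
$ ls /nfsdata/share/default/test-claim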
