Containers can be short-lived: they are created and destroyed frequently, and when a container is destroyed, the data stored inside it is lost. In many cases that is not what users want, so Kubernetes introduces the concept of a Volume to persist container data. A Volume is a shared directory in a Pod that can be accessed by multiple containers: it is defined at the Pod level and then mounted by the containers in that Pod at specific paths. Kubernetes uses Volumes both to share data between containers in the same Pod and to store data persistently. A Volume's lifecycle is not tied to any single container in the Pod; when a container terminates or restarts, the data in the Volume is not lost. Kubernetes supports many Volume types; the most common are:
- Basic storage: EmptyDir, HostPath, NFS
- Advanced storage: PV, PVC
- Configuration storage: ConfigMap, Secret
1. Basic Storage
1.1 EmptyDir
EmptyDir is the most basic Volume type: an EmptyDir is simply an empty directory on the host. It is created when the Pod is assigned to a Node; its initial content is empty, and there is no need to specify a corresponding host directory, because Kubernetes allocates one automatically. When the Pod is destroyed, the data in the EmptyDir is permanently deleted as well. Typical uses of EmptyDir:
- scratch space, for example a temporary directory an application needs at runtime that does not have to be kept
- a directory from which one container fetches data produced by another (a shared directory between containers)
Next, let's exercise EmptyDir with a file-sharing example between containers. In one Pod we prepare two containers, nginx and busybox, declare a single Volume and mount it into a directory of each container; nginx writes its logs into the Volume, and busybox streams the log content to the console with a command.
First delete the Pods, controllers, and so on previously created under dev:
[root@openEuler-1 ~]# kubectl delete ns dev
[root@openEuler-1 ~]# kubectl create ns dev
namespace/dev created
Create volume-emptydir.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
  namespace: dev
spec:
  volumes:
  - name: logs-volume
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: [ "/bin/sh", "-c", "tail -f /logs/access.log"]
    volumeMounts:
    - name: logs-volume
      mountPath: /logs
[root@openEuler-1 ~]# kubectl create -f volume-emptydir.yaml
pod/volume-emptydir created
[root@openEuler-1 ~]# kubectl get pods volume-emptydir -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volume-emptydir 2/2 Running 0 3m56s 100.115.147.85 openeuler-2 <none> <none>
# Set a default index page
[root@openEuler-1 ~]# kubectl exec -it volume-emptydir -c nginx -n dev -- /bin/bash
root@volume-emptydir:/# echo "EmptyDir test" > /usr/share/nginx/html/index.html
root@volume-emptydir:/# exit
exit
[root@openEuler-1 ~]# curl 100.115.147.85
EmptyDir test
# Watch the access log through the busybox container
[root@openEuler-2 ~]# kubectl logs -f volume-emptydir -n dev -c busybox
100.108.116.64 - - [24/Aug/2025:03:09:50 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.79.1" "-"
100.108.116.64 - - [24/Aug/2025:03:15:47 +0000] "GET / HTTP/1.1" 200 14 "-" "curl/7.79.1" "-"
1.2 HostPath
Data in an EmptyDir is not persisted; it is destroyed along with the Pod. If you simply want to persist data on the host, HostPath is an option. HostPath mounts an actual directory on the Node host into the Pod for the containers to use, so even after the Pod is destroyed, the data still exists on the Node.
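The hostPath type field used in the manifest below controls how the path is validated or created before mounting. As a quick reference, these are the standard hostPath type values (shown here as an annotated sketch):

hostPath:
  path: /opt/logs
  type: DirectoryOrCreate
  # ""                - (default) perform no check before mounting
  # DirectoryOrCreate - create an empty directory (0755) at the path if it does not exist
  # Directory         - a directory must already exist at the path
  # FileOrCreate      - create an empty file (0644) at the path if it does not exist
  # File              - a file must already exist at the path
  # Socket            - a UNIX socket must exist at the path
  # CharDevice        - a character device must exist at the path
  # BlockDevice       - a block device must exist at the path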
Create volume-hostpath.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
  namespace: dev
spec:
  volumes:
  - name: logs-volume
    hostPath:
      path: /opt/logs
      type: DirectoryOrCreate
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: [ "/bin/sh", "-c", "tail -f /logs/access.log"]
    volumeMounts:
    - name: logs-volume
      mountPath: /logs
[root@openEuler-1 ~]# kubectl create -f volume-hostpath.yaml
pod/volume-hostpath created
[root@openEuler-1 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volume-hostpath 2/2 Running 0 21s 100.115.147.84 openeuler-2 <none> <none>
[root@openEuler-1 ~]# curl 100.115.147.84
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We skipped setting a custom index page here, so the default nginx welcome page is returned. The Pod was scheduled to openeuler-2; the access log can be found under /opt/logs on that node, where it survives Pod deletion.
1.3 NFS
HostPath solves data persistence, but once the Node fails and the Pod is moved to another node, the problem reappears. At that point you need a separate network storage system, commonly NFS or CIFS. NFS is a network file system: set up an NFS server and connect the Pod's storage directly to it; then no matter which node the Pod moves to, as long as the Node can reach the NFS server, the data stays accessible.
# Install the NFS service on all nodes
yum install nfs-utils -y
# Create the shared directory, export it, and restart the services
[root@openEuler-1 ~]# mkdir /nfstest
[root@openEuler-1 ~]# vim /etc/exports
[root@openEuler-1 ~]# cat /etc/exports
/nfstest *(rw,no_root_squash)
[root@openEuler-1 ~]# systemctl restart rpcbind nfs-server
[root@openEuler-2 logs]# showmount -e 192.168.93.10
Export list for 192.168.93.10:
/nfstest *
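Optionally, before wiring NFS into a Pod, verify from a worker node that the export can actually be mounted (a quick manual sanity check; /mnt is just a temporary mount point):
[root@openEuler-2 ~]# mount -t nfs 192.168.93.10:/nfstest /mnt
[root@openEuler-2 ~]# umount /mnt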
Now we can write the Pod manifest. Create volume-nfs.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: volume-nfs
  namespace: dev
spec:
  volumes:
  - name: logs-volume
    nfs:
      server: 192.168.93.10
      path: /nfstest
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs-volume
      mountPath: /var/log/nginx
  - name: busybox
    image: busybox:1.30
    command: [ "/bin/sh", "-c", "tail -f /logs/access.log"]
    volumeMounts:
    - name: logs-volume
      mountPath: /logs
Test:
[root@openEuler-1 ~]# kubectl create -f volume-nfs.yaml
pod/volume-nfs created
[root@openEuler-1 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volume-nfs 2/2 Running 0 5s 100.115.147.86 openeuler-2 <none> <none>
[root@openEuler-1 ~]# ls /nfstest/
access.log error.log
2. Configuration Storage
In a traditional architecture, configuration files live on the host and a program is pointed at one when it starts. With containerized deployments the node a container lands on is not fixed, so that approach no longer works; and if the configuration file is baked into the image at build time, every configuration change becomes painful. Kubernetes therefore abstracts the ConfigMap concept, separating configuration from Pods and components; this keeps workloads portable and makes configuration easier to change and manage. In production, for example, the configuration files of applications such as Nginx and Redis can be stored in a ConfigMap and simply mounted into the containers.
Compared with Secret, ConfigMap is meant for storing and sharing non-sensitive, unencrypted configuration data; for sensitive information in the cluster, prefer a Secret.
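Besides the imperative kubectl create commands shown below, a ConfigMap can also be written declaratively as a manifest (a minimal sketch; the name demo-config and its data are made up for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  namespace: dev
data:
  log_level: INFO
  app.properties: |
    enemies=aliens
    lives=3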
Create from a directory:
[root@openEuler-1 ~]# mkdir test
[root@openEuler-1 ~]# cp /etc/hosts /etc/resolv.conf ./test
[root@openEuler-1 ~]# kubectl create configmap my-config --from-file=/root/test -n dev
configmap/my-config created
[root@openEuler-1 ~]# kubectl describe configmaps my-config -n dev
Create from a file:
[root@openEuler-1 ~]# cat game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
# By default the key name is the file name
[root@openEuler-1 ~]# kubectl create cm game-config2 --from-file=/root/game.properties -n dev
configmap/game-config2 created
[root@openEuler-1 ~]# kubectl get cm game-config2 -n dev
NAME DATA AGE
game-config2 1 25s
# Custom key name
[root@openEuler-1 ~]# kubectl create cm game-config3 --from-file=self-key=/root/game.properties -n dev
configmap/game-config3 created
[root@openEuler-1 ~]# kubectl get cm game-config3 -n dev -o yaml
apiVersion: v1
data:
self-key: |
You can also pass --from-file multiple times to build a ConfigMap from several data sources. This works like the directory-based approach, except that each key name can be set individually.
Create a ConfigMap from an env file:
[root@openEuler-1 ~]# vim game-env-file.properties
[root@openEuler-1 ~]# cat game-env-file.properties
enemies=aliens
lives=3
allowed="true"
[root@openEuler-1 ~]# kubectl create cm game-config-env-file --from-env-file=/root/game-env-file.properties -n dev
configmap/game-config-env-file created
# Created in key-value form
[root@openEuler-1 ~]# kubectl get cm game-config-env-file -n dev -o yaml
apiVersion: v1
data:
allowed: '"true"'
enemies: aliens
lives: "3"
kind: ConfigMap
metadata:
creationTimestamp: "2025-08-24T05:18:20Z"
name: game-config-env-file
namespace: dev
resourceVersion: "279563"
uid: 9721de6e-c5bf-4ef8-bc32-91c810036bd8
Note: if --from-env-file is passed multiple times to create a ConfigMap from several data sources, only the last env file takes effect (multiple --from-env-file arguments are supported from v1.23 onward).
Create a ConfigMap from literal values:
[root@openEuler-1 ~]# kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm -n dev
configmap/special-config created
[root@openEuler-1 ~]# kubectl get cm special-config -n dev -o yaml
apiVersion: v1
data:
special.how: very
special.type: charm
kind: ConfigMap
metadata:
creationTimestamp: "2025-08-24T05:29:19Z"
name: special-config
namespace: dev
resourceVersion: "281485"
uid: aac842df-a314-418c-883c-1b62275d3988
3. ConfigMap in Practice
3.1 Mounting a ConfigMap as a volume
[root@openEuler-1 ~]# vim pod-configmap.yaml
[root@openEuler-1 ~]# cat pod-configmap.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    volumeMounts:
    - name: config
      mountPath: /configmap/config
  volumes:
  - name: config
    configMap:
      name: game-config-env-file
[root@openEuler-1 ~]# kubectl delete -f pod-configmap.yaml -n dev
pod "pod-configmap" deleted
[root@openEuler-1 ~]# kubectl create -f pod-configmap.yaml -n dev
pod/pod-configmap created
[root@openEuler-1 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-configmap 1/1 Running 0 3s 100.105.212.181 openeuler-3 <none> <none>
volume-nfs 2/2 Running 0 96m 100.115.147.86 openeuler-2 <none> <none>
# Verify the mount
[root@openEuler-1 ~]# kubectl exec -it pod-configmap -n dev -- /bin/bash
root@pod-configmap:/# cd configmap/config
root@pod-configmap:/configmap/config# ls
allowed enemies lives
root@pod-configmap:/configmap/config# more allowed
"true"
# The ConfigMap can also be edited in place, e.g.
kubectl edit cm game-config-env-file -n dev
3.2 Defining container environment variables with valueFrom
First create a key=value style ConfigMap with --from-literal or --from-env-file, then hand one of its values (special.how) to a Pod as an environment variable named SPECIAL_LEVEL_KEY (the same works for a Deployment and other resources). There are two ways to use ConfigMap data as Pod variables: .containers.env.valueFrom and .containers.envFrom. Let's look at valueFrom first (note where valueFrom sits in the spec):
Edit pod-configmap.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: dev
spec:
  containers:
  - name: pod1
    image: busybox:1.30
    command: [ "/bin/sh", "-c", "env"]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how # must be a key that exists in the ConfigMap
  restartPolicy: Never
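The container just prints its environment and exits, so the injected variable shows up in the Pod log. A hedged verification, assuming the edited manifest above was saved back to pod-configmap.yaml; the expected value comes from the special-config ConfigMap created earlier:

[root@openEuler-1 ~]# kubectl create -f pod-configmap.yaml
pod/pod1 created
[root@openEuler-1 ~]# kubectl logs pod1 -n dev | grep SPECIAL
SPECIAL_LEVEL_KEY=very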
3.3 Defining container environment variables with envFrom
The valueFrom shown above is typically used to set an environment variable from a single ConfigMap key, but in practice it is more common to turn all the data in a ConfigMap into environment variables at once. That is what the envFrom parameter does; the corresponding YAML is below (note where envFrom sits in the spec):
[root@openEuler-1 ~]# vim deploy-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: env-valuefrom
  name: env-valuefrom
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-valuefrom
  template:
    metadata:
      labels:
        app: env-valuefrom
    spec:
      containers:
      - image: busybox:1.30
        name: env-valuefrom
        command: ["/bin/sh", "-c", "env"]
        envFrom:
        - configMapRef:
            name: game-config-env-file
          prefix: fromCM_
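Apply the Deployment first (the transcript below assumes this step has been run):

[root@openEuler-1 ~]# kubectl apply -f deploy-env.yaml
deployment.apps/env-valuefrom created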
[root@openEuler-1 ~]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
env-valuefrom-756784d7b8-mwsw4 0/1 Completed 2 (43s ago) 17s
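Because the container runs env and exits immediately, the Pod keeps completing and restarting; that is expected here. The injected variables, each key prefixed with fromCM_, are visible in its log (expected output given the game-config-env-file ConfigMap above, trimmed to the relevant lines):

[root@openEuler-1 ~]# kubectl logs deploy/env-valuefrom -n dev | grep fromCM_
fromCM_allowed="true"
fromCM_enemies=aliens
fromCM_lives=3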
3.4 Hot updates
Normally, with a configuration like the following, a running Pod container can read the values stored in the ConfigMap.
[root@openEuler-1 ~]# cat host_map.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
  namespace: dev
data:
  log_level: INFO
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-nginx # make sure the selector matches the template labels
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      volumes:
      - name: config-volume
        configMap:
          name: log-config
[root@openEuler-1 ~]# kubectl create -f host_map.yaml
configmap/log-config created
deployment.apps/my-nginx created
[root@openEuler-1 ~]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-575b49dfbb-c4h4p 1/1 Running 0 20s 100.115.147.91 openeuler-2 <none> <none>
# Check the log_level value, then change it to warning; the update takes roughly thirty seconds to propagate, so be patient
[root@openEuler-1 ~]# kubectl exec -it my-nginx-575b49dfbb-c4h4p -n dev -- cat /etc/config/log_level
INFO
[root@openEuler-1 ~]# kubectl edit configmap/log-config -n dev
configmap/log-config edited
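After the edit (assuming log_level was changed to warning), wait for the kubelet sync period and re-read the mounted file; the new value appears without restarting the Pod:

[root@openEuler-1 ~]# kubectl exec -it my-nginx-575b49dfbb-c4h4p -n dev -- cat /etc/config/log_level
warning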
3.5 Secret
Kubernetes also has an object very similar to ConfigMap, called Secret. It is mainly used to store sensitive information such as passwords, keys, and certificates.
There are three Secret types:
- Opaque: a base64-encoded Secret used for passwords, keys, and the like; the data can be recovered with base64 --decode, so this is encoding rather than real encryption and offers very weak secrecy.
- Service Account: used to access the Kubernetes API; created automatically by Kubernetes and mounted automatically into Pods at /run/secrets/kubernetes.io/serviceaccount.
- kubernetes.io/dockerconfigjson: stores credentials for a private Docker registry.
Note: Opaque data is a map whose values must be base64 encoded.
[root@openEuler-1 ~]# echo -n "admin" > username.txt
[root@openEuler-1 ~]# echo -n "123" > password.txt
[root@openEuler-1 ~]# kubectl create secret generic db-user-pass --from-file=username.txt --from-file=password.txt -n dev
secret/db-user-pass created
[root@openEuler-1 ~]# kubectl get secrets -n dev
NAME TYPE DATA AGE
db-user-pass Opaque 2 29s
[root@openEuler-1 ~]# kubectl describe secrets db-user-pass -n dev
Name: db-user-pass
Namespace: dev
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
username.txt: 5 bytes
password.txt: 3 bytes
[root@openEuler-1 ~]# kubectl get secrets db-user-pass -n dev -o yaml
apiVersion: v1
data:
password.txt: MTIz
username.txt: YWRtaW4=
kind: Secret
metadata:
creationTimestamp: "2025-08-25T02:12:11Z"
name: db-user-pass
namespace: dev
resourceVersion: "308315"
uid: 65548e3d-a5cc-4dd7-8473-413ad1053355
type: Opaque
[root@openEuler-1 ~]# echo -n 'admin' | base64
YWRtaW4=
[root@openEuler-1 ~]# echo -n 'password' | base64
cGFzc3dvcmQ=
Decode:
[root@openEuler-1 ~]# echo -n 'YWRtaW4=' | base64 --decode
admin
[root@openEuler-1 ~]# echo -n 'MTIz' | base64 --decode
123
Secret usage method ①:
Create the Secret from a YAML file:
[root@openEuler-1 ~]# cat sa1.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: dev
type: Opaque
data:
  user: YWRtaW4=
  pass: MWYyZDFlMmU2N2Rm
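If you would rather not base64-encode values by hand, the stringData field accepts plain text and the API server encodes it on write. An equivalent sketch of the same Secret:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: dev
type: Opaque
stringData:
  user: admin
  pass: 1f2d1e2e67df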
Mount it via a Volume:
[root@openEuler-1 ~]# cat pod-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
  namespace: dev
spec:
  containers:
  - name: test-secret-volume
    image: busybox:1.30
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: mysql-cred
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: mysql-cred
    projected:
      sources:
      - secret:
          name: mysecret
          items:
          - key: user
            path: user
          - key: pass
            path: pass
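Apply both manifests before checking (the creation step is implied by the transcript that follows):

[root@openEuler-1 ~]# kubectl apply -f sa1.yaml
secret/mysecret created
[root@openEuler-1 ~]# kubectl apply -f pod-secret.yaml
pod/test-projected-volume created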
[root@openEuler-1 ~]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
my-nginx-575b49dfbb-c4h4p 1/1 Running 0 63m
test-projected-volume 1/1 Running 0 28s
Verify that the Secret data is actually inside the container:
[root@openEuler-1 ~]# kubectl exec -it test-projected-volume -n dev -- /bin/sh
/ # ls /projected-volume
pass user
/ # more /projected-volume/pass
1f2d1e2e67df
/ # more /projected-volume/user
admin
/ #
Secret usage method ②:
Inject the values through environment variables:
[root@openEuler-1 ~]# cat pod-env-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env
  namespace: dev
spec:
  containers:
  - name: myapp
    image: busybox
    args:
    - sleep
    - "86400"
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: user
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: pass
  restartPolicy: Never
[root@openEuler-1 ~]# kubectl create -f pod-env-secret.yaml
pod/pod-secret-env created
[root@openEuler-1 ~]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
my-nginx-575b49dfbb-c4h4p 1/1 Running 0 106m
pod-secret-env 1/1 Running 0 25s
[root@openEuler-1 ~]# kubectl exec -it pod-secret-env -n dev -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=pod-secret-env
TERM=xterm
SECRET_USERNAME=admin
SECRET_PASSWORD=1f2d1e2e67df
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
HOME=/root
4. Advanced Storage: PV and PVC
PV (PersistentVolume) is an abstraction over underlying shared storage. PVs are usually created and configured by the Kubernetes administrator; a PV is tied to the specific underlying storage technology and connects to it through plugins.
PVC (PersistentVolumeClaim) is a user's declaration of storage requirements; in other words, a PVC is a resource request that the user submits to Kubernetes.
With PV and PVC, the work can be divided more cleanly:
- Storage: maintained by storage engineers
- PV: maintained by Kubernetes administrators
- PVC: maintained by Kubernetes users
4.1 PV
PV is the abstraction of a storage resource; here is a skeleton manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  nfs: # storage type, corresponding to the actual underlying storage
  capacity: # capacity; currently only storage size can be set
    storage: 2Gi
  accessModes: # access modes
  storageClassName: # storage class
  persistentVolumeReclaimPolicy: # reclaim policy
Key PV configuration parameters:
Storage type
The type of the underlying storage. Kubernetes supports many types, and each has its own configuration options.
Capacity (capacity)
Currently only storage size can be set (storage=1Gi); metrics such as IOPS and throughput may be added in the future.
Access modes (accessModes)
Describe an application's access rights to the storage resource. The options are:
- ReadWriteOnce (RWO): read-write, but the volume can be mounted by only a single node
- ReadOnlyMany (ROX): read-only, can be mounted by many nodes
- ReadWriteMany (RWX): read-write, can be mounted by many nodes
- Note that different underlying storage types may support different access modes
Reclaim policy (persistentVolumeReclaimPolicy)
What happens to the PV once it is no longer used. Three policies are supported:
- Retain: keep the data; an administrator must clean it up manually
- Recycle: wipe the data in the PV, equivalent to running rm -rf /thevolume/*
- Delete: the backend storage behind the PV deletes the volume; this is typical of cloud providers' storage services
Note that different underlying storage types may support different reclaim policies.
Storage class
A PV can specify a storage class through the storageClassName parameter:
- a PV with a specific class can only be bound to a PVC that requests that class
- a PV without a class can only be bound to a PVC that requests no class
Status (status)
During its lifecycle a PV may be in one of four phases:
- Available: the PV is available and not yet bound to any PVC
- Bound: the PV has been bound to a PVC
- Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation of the PV failed
4.2 Lab (PV)
Prepare the NFS environment:
[root@openEuler-1 ~]# mkdir -p /data/pv{1..3}
[root@openEuler-1 ~]# more /etc/exports
/nfstest *(rw,no_root_squash)
/data/pv1 192.168.93.0/24(rw,no_root_squash)
/data/pv2 192.168.93.0/24(rw,no_root_squash)
/data/pv3 192.168.93.0/24(rw,no_root_squash)
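If the three /data/pv* entries were just added, re-export them before continuing (the same command is used again in section 5.3.1):

[root@openEuler-1 ~]# exportfs -arv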
Create pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/pv1
    server: openeuler-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/pv2
    server: openeuler-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/pv3
    server: openeuler-1
Create the PVs:
[root@openEuler-1 ~]# kubectl create -f pv.yaml
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created
[root@openEuler-1 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv1 1Gi RWX Retain Available <unset> 11s
pv2 2Gi RWX Retain Available <unset> 11s
pv3 3Gi RWX Retain Available <unset> 11s
4.3 PVC
A PVC is a request for storage, declaring requirements on storage size, access modes, and storage class. Skeleton manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: dev
spec:
  accessModes: # access modes
  selector: # label selector for filtering PVs
  storageClassName: # storage class
  resources: # requested resources
    requests:
      storage: 5Gi
Key PVC configuration parameters:
Access modes (accessModes)
Describe the application's access rights to the storage resource.
Selector (selector)
With a label selector, a PVC can filter among the PVs that already exist in the system.
Storage class (storageClassName)
A PVC can specify the class of backend storage it needs; only PVs with that class can be selected for it.
Resource requests (resources)
Describe the requested storage resources. A sketch combining these fields follows below.
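For instance, a claim could be narrowed down to PVs carrying a particular label and class (a hypothetical sketch; the disktype label and the fast class are made up for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selective
  namespace: dev
spec:
  accessModes:
  - ReadWriteMany
  selector:
    matchLabels:
      disktype: ssd # only PVs labeled disktype=ssd are considered
  storageClassName: fast # only PVs with storageClassName fast can match
  resources:
    requests:
      storage: 1Gi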
4.4 Lab (PVC)
Create pvc.yaml to claim the PVs:
[root@openEuler-1 ~]# vim pvc.yaml
[root@openEuler-1 ~]# more pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
  namespace: dev
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
  namespace: dev
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
  namespace: dev
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
[root@openEuler-1 ~]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@openEuler-1 ~]# kubectl get pvc -n dev
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pv1 1Gi RWX <unset> 13s
pvc2 Bound pv2 2Gi RWX <unset> 13s
pvc3 Bound pv3 3Gi RWX <unset> 13s
Create pods.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: dev
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh", "-c", "while true;do echo pod1 >> /root/out.txt; sleep 10; done;"]
    volumeMounts:
    - name: volume
      mountPath: /root/
  volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: pvc1
      readOnly: false
Verify:
[root@openEuler-1 ~]# kubectl create -f pods.yaml
pod/pod1 created
[root@openEuler-1 ~]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
my-nginx-575b49dfbb-c4h4p 1/1 Running 0 3h43m
pod-secret-env 1/1 Running 0 116m
pod1 1/1 Running 0 3s
# Check the file written to the NFS export
[root@openEuler-1 ~]# more /data/pv1/out.txt
pod1
pod1
pod1
pod1
pod1
pod1
4.5 Lifecycle
PVC and PV bind one-to-one, and their interaction follows this lifecycle (a Released-state demo follows the list):
- Provisioning: the administrator manually creates the underlying storage and the PV.
- Binding: the user creates a PVC, and Kubernetes finds a PV that satisfies it and binds the two. Once the PVC is defined, the system picks a satisfying PV from those that exist; if one is found it is bound to the PVC and the application can use it, and if not, the PVC stays Pending indefinitely until an administrator creates a PV that meets its requirements. Once bound, a PV is owned exclusively by that PVC and cannot be bound to another.
- Using: the user consumes the PVC in a Pod like any other volume, mounting it at a path inside the container.
- Releasing: the user deletes the PVC to release the PV. When the storage is no longer needed, the user deletes the PVC; the bound PV is then marked "released" but cannot be bound to another PVC yet, because data written through the previous PVC may still remain on the device. Only after cleanup can the PV be used again.
- Reclaiming: Kubernetes reclaims the resource according to the PV's reclaim policy, which the administrator sets to decide how leftover data is handled after the PVC releases the PV. Only once the PV's storage has been reclaimed can it be bound and used by a new PVC.
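A quick way to observe the Released phase, using the pv1/pvc1 pair from the lab above (a sketch; with the Retain policy the data is not cleaned automatically):

[root@openEuler-1 ~]# kubectl delete pod pod1 -n dev
[root@openEuler-1 ~]# kubectl delete pvc pvc1 -n dev
[root@openEuler-1 ~]# kubectl get pv pv1
# STATUS should now show Released; the files under /data/pv1 are still on the NFS server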
5. StorageClass
5.1 What is a StorageClass
In a large Kubernetes cluster there may be thousands of PVCs, which would mean operations staff had to create that many PVs ahead of time. On top of that, as projects evolve, new PVCs keep being submitted, so operators would have to keep adding new PVs that satisfy them, otherwise new Pods would fail to be created because their PVCs cannot bind to any PV. Furthermore, the storage a PVC requests may not cover every demand an application places on its storage, and different applications have different performance requirements, such as read/write speed and concurrency. To solve this, Kubernetes introduces another resource object: StorageClass. Through StorageClass definitions, an administrator can classify storage resources into types, for example fast storage and slow storage; from the StorageClass description it is immediately clear what the concrete characteristics of each kind of storage are, so storage that fits the application can be requested.
Kubernetes provides a mechanism that creates PVs automatically: Dynamic Provisioning. Its core is the StorageClass API object, which defines two things:
1. The properties of the PV, such as the storage type and volume size.
2. The storage plugin needed to create that kind of PV.
With these two pieces of information, Kubernetes can take a user-submitted PVC, find the corresponding StorageClass, call the storage plugin that the StorageClass declares, and create the required PV. Using it is actually simple: write a YAML file for your needs and run kubectl create.
5.2 How StorageClass works
To use a StorageClass we must install the matching automatic provisioning program. Since the storage backend here is NFS, we need the nfs-client provisioner (also just called the Provisioner). It uses the NFS server we have already configured to create persistent volumes automatically, in other words to create PVs for us.
1. An automatically created PV is placed in the NFS server's shared directory under the name ${namespace}-${pvcName}-${pvName}.
2. When such a PV is reclaimed, it is kept on the NFS server under the name archived-${namespace}-${pvcName}-${pvName}.
Workflow: a PVC that references the StorageClass triggers the Provisioner, which creates a directory on the NFS share and a matching PV; Kubernetes then binds the PVC to that PV. (The original flow diagram is omitted here.)
5.3 Steps to set up StorageClass + NFS
5.3.1 Create a working NFS server
[root@openEuler-1 ~]# mkdir -p /nfs/kubernetes
[root@openEuler-1 ~]# vim /etc/exports
[root@openEuler-1 ~]# exportfs -arv
exporting 192.168.93.0/24:/nfs/kubernetes
exporting 192.168.93.0/24:/data/pv3
exporting 192.168.93.0/24:/data/pv2
exporting 192.168.93.0/24:/data/pv1
exporting *:/nfstest
5.3.2 Create the ServiceAccount that controls the permissions the NFS provisioner runs with in the cluster
[root@openEuler-1 ~]# mkdir storangeclass
[root@openEuler-1 ~]# cd storangeclass/
[root@openEuler-1 storangeclass]# vim rbac.yaml
[root@openEuler-1 storangeclass]# more rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default # set the namespace to match your environment; same below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@openEuler-1 storangeclass]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@openEuler-1 storangeclass]# kubectl get role,rolebinding
NAME CREATED AT
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner 2025-08-25T06:40:44Z
NAME ROLE AGE
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner Role/leader-locking-nfs-client-provisioner 10m
5.3.3 Create the StorageClass. It accepts PVCs, calls the NFS provisioner to do the provisioning, and gets PV and PVC bound to each other
[root@openEuler-1 storangeclass]# cat nfs-StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage # must match the PROVISIONER_NAME environment variable in the provisioner deployment
parameters:
  archiveOnDelete: "false"
[root@openEuler-1 storangeclass]# kubectl create -f nfs-StorageClass.yaml
[root@openEuler-1 storangeclass]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage nfs-storage Delete Immediate false 8m40s
5.3.4 Create the NFS provisioner. It has two jobs: creating mount points (volumes) under the NFS share, and creating PVs that it associates with those NFS mount points
[root@openEuler-1 storangeclass]# cat nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default # keep in sync with the namespace in the RBAC file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: nfs-storage # provisioner name; must match the StorageClass configuration
        - name: NFS_SERVER
          value: 192.168.93.10 # NFS server IP address
        - name: NFS_PATH
          value: /nfs/kubernetes # NFS export path
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.93.10 # NFS server IP address
          path: /nfs/kubernetes # NFS export path
Create the nfs-provisioner:
[root@openEuler-1 storangeclass]# kubectl apply -f nfs-provisioner.yaml
deployment.apps/nfs-client-provisioner created
[root@openEuler-1 storangeclass]# kubectl get pod,deploy
NAME READY STATUS RESTARTS AGE
pod/nfs-client-provisioner-6db6989d65-hw69r 1/1 Running 0 8m21s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nfs-client-provisioner 1/1 1 1 8m21s
5.4 Create a Pod and PVC to verify the deployment
[root@openEuler-1 storangeclass]# cat pvc1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
  storageClassName: managed-nfs-storage
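Apply the claim first (the check below assumes this step):

[root@openEuler-1 storangeclass]# kubectl apply -f pvc1.yaml
persistentvolumeclaim/test-claim created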
Make sure the PVC status is Bound (a PV is created automatically and bound by default):
[root@openEuler-1 storangeclass]# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/test-claim Bound pvc-77e93cdd-6a8d-4e87-ab21-618e4ee1d9c8 10Mi RWX managed-nfs-storage <unset> 13m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/pvc-77e93cdd-6a8d-4e87-ab21-618e4ee1d9c8 10Mi RWX Delete Bound default/test-claim managed-nfs-storage <unset> 13m
Create a test Pod and check that the volume mounts correctly:
[root@openEuler-1 storangeclass]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.35
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1" # create a SUCCESS file, then exit
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim # must match the PVC name
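Apply it and wait for the Pod to finish (a hedged sketch of the run; the touch command exits immediately, so the Pod ends up Completed):

[root@openEuler-1 storangeclass]# kubectl apply -f test-pod.yaml
pod/test-pod created
[root@openEuler-1 storangeclass]# kubectl get pod test-pod
# STATUS should show Completed once the file has been written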
Check the result on the NFS server:
[root@openEuler-1 ~]# cd /nfs/kubernetes/default-test-claim-pvc-77e93cdd-6a8d-4e87-ab21-618e4ee1d9c8/
[root@openEuler-1 default-test-claim-pvc-77e93cdd-6a8d-4e87-ab21-618e4ee1d9c8]# ls
SUCCESS
5.5 StatefulSet + volumeClaimTemplates: automatic PV creation
Create the headless Service and the StatefulSet; edit nginx-statefulset.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None # None makes this a headless Service
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx-headless" # must match the Service name
  replicas: 2 # two replicas
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: managed-nfs-storage
[root@openEuler-1 ~]# kubectl apply -f nginx-statefulset.yaml
Check the results:
[root@openEuler-1 storangeclass]# kubectl get pod -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 7m29s
web-1 1/1 Running 0 7m28s
[root@openEuler-1 storangeclass]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-5ddc5da5-78b8-4211-8e0f-4fc7e315e347 1Gi RWO Delete Bound default/www-web-0 managed-nfs-storage <unset> 2m4s
pvc-77e93cdd-6a8d-4e87-ab21-618e4ee1d9c8 10Mi RWX Delete Bound default/test-claim managed-nfs-storage <unset> 55m
pvc-af6d1155-0aba-4092-8441-1ac40a509bb9 1Gi RWO Delete Bound default/www-web-1 managed-nfs-storage <unset> 2m
[root@openEuler-1 storangeclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
test-claim Bound pvc-77e93cdd-6a8d-4e87-ab21-618e4ee1d9c8 10Mi RWX managed-nfs-storage <unset> 56m
www-web-0 Bound pvc-5ddc5da5-78b8-4211-8e0f-4fc7e315e347 1Gi RWO managed-nfs-storage <unset> 2m7s
www-web-1 Bound pvc-af6d1155-0aba-4092-8441-1ac40a509bb9 1Gi RWO managed-nfs-storage <unset> 2m3s
On the NFS server:
[root@openEuler-1 ~]# ll /nfs/kubernetes/
total 12
drwxrwxrwx 2 root root 4096 Aug 25 15:40 default-test-claim-pvc-77e93cdd-6a8d-4e87-ab21-618e4ee1d9c8
drwxrwxrwx 2 root root 4096 Aug 25 16:00 default-www-web-0-pvc-5ddc5da5-78b8-4211-8e0f-4fc7e315e347
drwxrwxrwx 2 root root 4096 Aug 25 16:00 default-www-web-1-pvc-af6d1155-0aba-4092-8441-1ac40a509bb9
On openEuler-1 (the NFS server, working inside /nfs/kubernetes), write an index page into each replica's directory, then curl the Pods from any cluster node:
[root@openEuler-1 kubernetes]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 34m 100.105.212.178 openeuler-3 <none> <none>
web-1 1/1 Running 0 2m32s 100.115.147.102 openeuler-2 <none> <none>
[root@openEuler-1 kubernetes]# echo web0 > default-www-web-0-pvc-5ddc5da5-78b8-4211-8e0f-4fc7e315e347/index.html
[root@openEuler-1 kubernetes]# echo web1 > default-www-web-1-pvc-af6d1155-0aba-4092-8441-1ac40a509bb9/index.html
[root@openEuler-1 kubernetes]# curl 100.115.147.102
web1
[root@openEuler-1 kubernetes]# curl 100.105.212.178
web0
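Each replica got its own PVC (www-web-0, www-web-1) and serves its own data, which is exactly the stable-storage guarantee a StatefulSet provides. Scaling up would stamp out a new claim from the template automatically (a hedged sketch):

[root@openEuler-1 kubernetes]# kubectl scale statefulset web --replicas=3
statefulset.apps/web scaled
[root@openEuler-1 kubernetes]# kubectl get pvc www-web-2
# a new Bound claim www-web-2 appears, backed by a freshly provisioned PV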