Introduction
Kubernetes storage has always been a bit of a headache for me. For now, a single-node NFS server is enough to cover simple application needs and lets Pods move freely between hosts.
Setting up the NFS storage service
The system I am running is Debian 12.8, with the IP address 10.0.0.4:
root@nnas:/data# cat /etc/debian_version
12.8
root@nnas:/data# uname -a
Linux nnas 6.1.0-27-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.115-1 (2024-11-01) x86_64 GNU/Linux
root@nnas:~# ip -4 addr
...
inet 10.0.0.4/24 brd 10.0.0.255 scope global dynamic ens18
...
1. First, install the required packages
root@nnas:/data# apt install nfs-kernel-server rpcbind
Allow rpcbind connections from the local network
This is for compatibility with NFSv2/NFSv3 clients; the local network prefix is 10.0.0.
root@nnas:/data# perl -pi -e 's/^OPTIONS/#OPTIONS/' /etc/default/rpcbind
root@nnas:/data# echo "rpcbind: 10.0.0." >> /etc/hosts.allow
root@nnas:/data# systemctl restart rpcbind.service
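(My own quick check, not part of the original steps.) rpcinfo can confirm that rpcbind answers RPC queries locally; the nfs and mountd entries will only show up once the NFS server below is exporting:
root@nnas:/data# rpcinfo -p localhost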
The shared directory will be /data for now.
As a reminder, these are the current K8s hosts:
10.0.0.1 cuipi
10.0.0.2 jiangjiang
10.0.0.3 peento
2. Next, configure the exported directory and the hosts allowed to access it
root@nnas:/data# cat /etc/exports
/data 10.0.0.1(rw,sync,no_root_squash,no_subtree_check) 10.0.0.2(rw,sync,no_root_squash,no_subtree_check) 10.0.0.3(rw,sync,no_root_squash,no_subtree_check)
root@nnas:/data# exportfs -a
root@nnas:/data# /etc/init.d/nfs-kernel-server reload
Note: (rw,sync,no_root_squash,no_subtree_check) means read/write access; sync, so writes are committed to disk before the server replies (protects against data loss from cached writes if the service restarts); no_root_squash, so root on the clients keeps root privileges on the share instead of being squashed to an anonymous user; and no_subtree_check, which disables subtree checking and silences the exportfs warning about it.
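As an extra sanity check (my addition, not in the original steps), exportfs -v on the server prints the active exports together with their effective options:
root@nnas:/data# exportfs -v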
3. Now log on to the K8s hosts, install the NFS client, and verify access
root@jiangjiang:/mnt# apt install nfs-common
root@jiangjiang:/mnt# mkdir tmp
root@jiangjiang:/mnt# mount 10.0.0.4:/data /mnt/tmp
The share mounts and works as expected.
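A couple of extra checks I like to run here (not in the original steps): confirm the filesystem type and do a round-trip write. This manual mount is only for verification; the NFS mounts used by the cluster are handled by Kubernetes itself, so the test mount can be removed afterwards:
root@jiangjiang:/mnt# df -hT /mnt/tmp
root@jiangjiang:/mnt# touch /mnt/tmp/write-test && ls -l /mnt/tmp/write-test
root@jiangjiang:/mnt# rm /mnt/tmp/write-test
root@jiangjiang:/mnt# umount /mnt/tmp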
Configuring the K8s cluster to use NFS storage
1. Install a storage provisioner via Helm
root@cuipi:~# helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
root@cuipi:~# helm install nfs-client -n kube-system --set nfs.server=10.0.0.4 --set nfs.path=/data nfs-subdir-external-provisioner/nfs-subdir-external-provisioner
NAME: nfs-client
LAST DEPLOYED: Thu Nov 3 06:24:42 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@cuipi:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
.....
nfs-client-nfs-subdir-external-provisioner-fc65bd7d7-65khg 1/1 Running 0 23s
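The chart also creates a StorageClass, which is what the PVC below refers to by name; with the chart defaults it should be called nfs-client. Worth confirming before moving on, since a name mismatch would leave the PVC stuck in Pending:
root@cuipi:~# kubectl get storageclass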
2. Create the K8s resources
Create a PVC resource, pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: share-data
  namespace: default
  labels:
    app: share-data
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
Note: the storage request is not actually enforced for NFS, which means usage on the share can grow past the requested 10Gi.
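Optional, and not part of the original setup: if you would rather not set storageClassName on every claim, the class can be marked as the cluster default, and PVCs that omit the field will then land on this provisioner:
root@cuipi:~# kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'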
Create a Pod resource to verify reads and writes, pod.yaml:
kind: Pod
apiVersion: v1
metadata:
  name: share-data-test
  namespace: default
spec:
  containers:
    - name: share-data-test
      image: busybox:stable
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: share-data
3. Deploy both resources and verify
Deploy
root@cuipi:~# kubectl apply -f pvc.yaml -f pod.yaml
root@cuipi:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
...
share-data-test 0/1 Completed 0 23s
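Behind the scenes the provisioner should have created a PV and bound it to the claim; this can be confirmed from the K8s side before looking at the NFS server (the PV name is generated, so it will differ from run to run):
root@cuipi:~# kubectl get pvc share-data
root@cuipi:~# kubectl get pv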
Verify on the NFS server
root@nnas:~# cd /data/default-share-data-pvc-*
root@nnas:/data/default-share-data-pvc-d25352fb-10be-****-****-************# ls
SUCCESS
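Optional cleanup sketch, not part of the original verification: remove the test Pod and the PVC. Whether the default-share-data-pvc-* directory on the NFS server is deleted or archived afterwards depends on how the provisioner's reclaim/archive behavior is configured.
root@cuipi:~# kubectl delete -f pod.yaml -f pvc.yaml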