Kubernetes (k8s) setup: offline installation of Kubernetes 1.28 on CentOS 7.9

Part 1: Environment Preparation

Three virtual machines (CentOS 7.9 or later) and the offline installation package (the author distributes the package via the WeChat public account linked at the end of the original post; reply "k8s").

Part 2: Steps for All Nodes

1. Extract the installation package

tar -zxvf pkg20240805.tar.gz

2. Create the /etc/modules-load.d/containerd.conf configuration file

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

3. Load the modules so the configuration takes effect

modprobe overlay
modprobe br_netfilter

4. Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file

# create the file
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
# apply the settings
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

5. Load the IPVS kernel modules (for kernel 4.19 and later; older kernels load nf_conntrack_ipv4 instead of nf_conntrack)

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

6. Make the script executable and load the modules

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

7. Turn off swap, permanently

swapoff -a
sed -i "s/^[^#].*swap/#&/" /etc/fstab
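The sed pattern comments out any uncommented /etc/fstab line that mentions swap. You can preview its effect on a sample line before touching the real file (a sketch; the sample entry is a typical CentOS LVM swap line, not taken from your actual fstab):

```shell
# Preview what the fstab edit does (safe to run anywhere; the real
# command edits /etc/fstab in place). "&" in the replacement re-inserts
# the whole matched line, so "#&" prefixes it with a comment marker.
line='/dev/mapper/centos-swap swap swap defaults 0 0'
result=$(echo "$line" | sed 's/^[^#].*swap/#&/')
echo "$result"
```

Lines that already start with `#` are left untouched, because `^[^#]` requires the first character not to be a comment marker.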

8. Disable the firewall and SELinux

systemctl disable firewalld && systemctl stop firewalld
sed  -i 's/enforcing/disabled/g' /etc/selinux/config && setenforce 0

9. Configure /etc/hosts

# Set the hostnames (adjust to your own naming)

# (master)
hostnamectl set-hostname master-11 && bash
# (workers)
hostnamectl set-hostname node-12 && bash
hostnamectl set-hostname node-13 && bash

# hosts entries
cat >> /etc/hosts << EOF
192.168.56.11 master-11
192.168.56.12 node-12
192.168.56.13 node-13
EOF
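A small sanity check can confirm each expected "IP hostname" pair was written (a sketch, not part of the original steps; it runs against a temp copy here, so on a real node set HOSTS=/etc/hosts instead):

```shell
# Verify that every expected "IP hostname" mapping is present in a hosts
# file. HOSTS points at a temp copy here for safe illustration.
HOSTS=$(mktemp)
cat >> "$HOSTS" << EOF
192.168.56.11 master-11
192.168.56.12 node-12
192.168.56.13 node-13
EOF
missing=0
while read -r ip name; do
  grep -Eq "^$ip[[:space:]]+$name$" "$HOSTS" || { echo "missing: $name"; missing=1; }
done << EOF
192.168.56.11 master-11
192.168.56.12 node-12
192.168.56.13 node-13
EOF
[ "$missing" -eq 0 ] && echo "hosts entries OK"
rm -f "$HOSTS"
```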

# After configuring, ping each host from the others to verify connectivity

10. Install containerd and the CNI plugins (run on both master and worker nodes)

# change into the extracted directory
cd pkg/runc
# extract directly into the root filesystem
tar zxvf cri-containerd-1.6.25-linux-amd64.tar.gz -C /
# generate the default configuration file
mkdir /etc/containerd && containerd config default > /etc/containerd/config.toml
# adjust the defaults
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g'  /etc/containerd/config.toml
sed -i 's#registry.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g'  /etc/containerd/config.toml
sed -i 's/pause:3.6/pause:3.9/g' /etc/containerd/config.toml
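The substitutions can be previewed on sample lines before editing the real file (a sketch that is safe to run anywhere; the sample lines are assumptions about the default config's exact formatting):

```shell
# Preview the config.toml substitutions on sample input (the real
# commands edit /etc/containerd/config.toml in place).
s1=$(echo 'SystemdCgroup = false' | sed 's#SystemdCgroup = false#SystemdCgroup = true#g')
s2=$(echo 'sandbox_image = "registry.k8s.io/pause:3.6"' \
  | sed 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' \
  | sed 's/pause:3.6/pause:3.9/g')
echo "$s1"
echo "$s2"
```

Using `#` as the sed delimiter avoids having to escape the `/` characters inside the registry paths.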

# start containerd
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd

# install the CNI plugins
mkdir -p /opt/cni/bin 
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz

# update libseccomp
# check whether it is installed; the version must be greater than 2.3.1
rpm -qa | grep libseccomp
rpm -e  --nodeps libseccomp-2.3.1-4.el7.x86_64
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm
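The version requirement can be checked programmatically. The helper below is an illustrative sketch (not part of the original package): `sort -V` orders version strings numerically, so if the minimum sorts first, the installed version meets it:

```shell
# version_ok <installed> <minimum>: succeeds when installed >= minimum,
# using sort -V for version-aware comparison.
version_ok() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
version_ok 2.5.1 2.3.1 && echo "libseccomp version OK"
# On a real node:
# version_ok "$(rpm -q --qf '%{VERSION}' libseccomp)" 2.3.1
```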

11. Install the kubectl, kubeadm, and kubelet trio

# note the directory
cd pkg/kubeadm && yum localinstall *.rpm -y

12. Enable and start kubelet

systemctl enable kubelet && systemctl start kubelet

13. Import the images

cd pkg/images && sh images_import.sh
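The contents of images_import.sh are not shown in the package listing; a minimal equivalent (an assumption, not the actual script) would loop over the bundled tarballs and import each into containerd's k8s.io namespace. The sketch below only prints the commands against a temp directory, so it is safe to run anywhere:

```shell
# Dry-run sketch of an image import loop (drop the echo to execute for
# real, and point IMAGES_DIR at pkg/images).
IMAGES_DIR=$(mktemp -d)
touch "$IMAGES_DIR/kube-apiserver.tar" "$IMAGES_DIR/pause.tar"
out=$(
  for tarball in "$IMAGES_DIR"/*.tar; do
    echo "ctr -n k8s.io images import $tarball"
  done
)
echo "$out"
rm -rf "$IMAGES_DIR"
```

Importing into the `k8s.io` namespace matters: that is the namespace the kubelet's CRI integration looks in.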

Part 3: Steps for the Master Node

1. kubeadm init

# --kubernetes-version: the Kubernetes version
# --apiserver-advertise-address: the master node's IP
# --pod-network-cidr: the pod CIDR; optional, but it must not overlap your host or service networks
kubeadm init --apiserver-advertise-address=192.168.56.11 --kubernetes-version v1.28.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
# Note: to re-initialize the cluster from scratch, run `kubeadm reset` first, then repeat the command above

# Note: if image pull errors are reported, re-tag the local mirror images:
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.1 registry.k8s.io/kube-apiserver:v1.28.1
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1
ctr -n k8s.io images tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
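The seven tag commands above follow one pattern, so they can be generated with a loop. This sketch only prints the commands (pipe the output to `sh`, or drop the echo, to actually run them); the only irregular case is coredns, which lives under a sub-path in registry.k8s.io:

```shell
# Print the ctr re-tag commands mapping the Aliyun mirror names back to
# the registry.k8s.io names kubeadm expects.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
cmds=$(
  for img in coredns:v1.10.1 etcd:3.5.9-0 kube-apiserver:v1.28.1 \
             kube-controller-manager:v1.28.1 kube-proxy:v1.28.1 \
             kube-scheduler:v1.28.1 pause:3.9; do
    case $img in
      coredns:*) target="registry.k8s.io/coredns/$img" ;;  # coredns uses a sub-path
      *)         target="registry.k8s.io/$img" ;;
    esac
    echo "ctr -n k8s.io images tag $MIRROR/$img $target"
  done
)
echo "$cmds"
```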


2. Set up kubectl access (kubeconfig)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Keep this join command for adding the worker nodes later
# (if the token expires, regenerate it with: kubeadm token create --print-join-command)
kubeadm join 192.168.56.11:6443 --token 4ors48.0ph175l15lqw5bez \
	--discovery-token-ca-cert-hash sha256:ac3ecc79034c8d7ed3240f3d6a354441a84a496d51f4705da98c30c58c2a4ecc

Part 4: Steps for the Worker Nodes

1. Join the other nodes to the cluster

kubeadm join 192.168.56.11:6443 --token 4ors48.0ph175l15lqw5bez \
	--discovery-token-ca-cert-hash sha256:ac3ecc79034c8d7ed3240f3d6a354441a84a496d51f4705da98c30c58c2a4ecc

Part 5: Steps for the Master Node

1. Deploy the network plugin

# Make sure everything is Running before installing the network plugin!
# (The first two pods, coredns, will stay Pending until the plugin is installed.)
[root@master-11 images]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-6554b8b87f-p8vp9            0/1     Pending   0          4m11s
coredns-6554b8b87f-wdmgn            0/1     Pending   0          4m11s
etcd-master-11                      1/1     Running   0          4m23s
kube-apiserver-master-11            1/1     Running   0          4m23s
kube-controller-manager-master-11   1/1     Running   0          4m23s
kube-proxy-9s57h                    1/1     Running   0          2m11s
kube-proxy-ksqm6                    1/1     Running   0          4m11s
kube-proxy-q5dxn                    1/1     Running   0          2m19s
kube-scheduler-master-11            1/1     Running   0          4m23s


cd pkg/calico && kubectl apply -f calico.yaml

Part 6: Verify the Installation

[root@master-11 calico]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7c968b5878-v8djq   1/1     Running   0          2m51s
calico-node-6vxq8                          1/1     Running   0          2m51s
calico-node-rc4z8                          1/1     Running   0          2m51s
calico-node-sx2z7                          1/1     Running   0          2m51s
coredns-6554b8b87f-p8vp9                   1/1     Running   0          7m18s
coredns-6554b8b87f-wdmgn                   1/1     Running   0          7m18s
etcd-master-11                             1/1     Running   0          7m30s
kube-apiserver-master-11                   1/1     Running   0          7m30s
kube-controller-manager-master-11          1/1     Running   0          7m30s
kube-proxy-9s57h                           1/1     Running   0          5m18s
kube-proxy-ksqm6                           1/1     Running   0          7m18s
kube-proxy-q5dxn                           1/1     Running   0          5m26s
kube-scheduler-master-11                   1/1     Running   0          7m30s

[root@master-11 calico]# kubectl get nodes
NAME        STATUS   ROLES           AGE     VERSION
master-11   Ready    control-plane   7m56s   v1.28.1
node-12     Ready    <none>          5m49s   v1.28.1
node-13     Ready    <none>          5m40s   v1.28.1
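This readiness check can also be scripted by counting the STATUS column. The sketch below parses the sample output shown above; on a live cluster, replace the heredoc with `kubectl get nodes --no-headers`:

```shell
# Count nodes whose STATUS column reads "Ready".
ready=$(awk '$2 == "Ready" { n++ } END { print n+0 }' << 'EOF'
master-11   Ready    control-plane   7m56s   v1.28.1
node-12     Ready    <none>          5m49s   v1.28.1
node-13     Ready    <none>          5m40s   v1.28.1
EOF
)
echo "Ready nodes: $ready"
```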

Part 7: Install Kuboard

Official docs: https://www.kuboard.cn/install/v3/install-in-k8s.html#%E5%AE%89%E8%A3%85

1. Label the node

# Only the master node needs this label
kubectl label nodes master-11  k8s.kuboard.cn/role=etcd

2. Create kuboard-v3.yaml

---
apiVersion: v1
kind: Namespace
metadata:
  name: kuboard

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kuboard-v3-config
  namespace: kuboard
data:
  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-built-in.html
  # [common]
  KUBOARD_SERVER_NODE_PORT: '30080'
  KUBOARD_AGENT_SERVER_UDP_PORT: '30081'
  KUBOARD_AGENT_SERVER_TCP_PORT: '30081'
  KUBOARD_SERVER_LOGRUS_LEVEL: info  # error / debug / trace
  # KUBOARD_AGENT_KEY is the key Agents use when communicating with Kuboard. Change it to an
  # arbitrary 32-character alphanumeric string; after changing it, delete the Kuboard Agents and re-import them.
  KUBOARD_AGENT_KEY: 32b7d6572c6255211b4eec9009e4a816
  KUBOARD_AGENT_IMAGE: registry.cn-hangzhou.aliyuncs.com/cxfpublic/kuboard-agent
  KUBOARD_QUESTDB_IMAGE: registry.cn-hangzhou.aliyuncs.com/cxfpublic/questdb:6.0.5
  KUBOARD_DISABLE_AUDIT: 'false' # set to 'true' (quotes required) to disable Kuboard's audit feature

  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-gitlab.html
  # [gitlab login]
  # KUBOARD_LOGIN_TYPE: "gitlab"
  # KUBOARD_ROOT_USER: "your-user-name-in-gitlab"
  # GITLAB_BASE_URL: "http://gitlab.mycompany.com"
  # GITLAB_APPLICATION_ID: "7c10882aa46810a0402d17c66103894ac5e43d6130b81c17f7f2d8ae182040b5"
  # GITLAB_CLIENT_SECRET: "77c149bd3a4b6870bffa1a1afaf37cba28a1817f4cf518699065f5a8fe958889"
  
  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-github.html
  # [github login]
  # KUBOARD_LOGIN_TYPE: "github"
  # KUBOARD_ROOT_USER: "your-user-name-in-github"
  # GITHUB_CLIENT_ID: "17577d45e4de7dad88e0"
  # GITHUB_CLIENT_SECRET: "ff738553a8c7e9ad39569c8d02c1d85ec19115a7"

  # For an explanation of the parameters below, see https://kuboard.cn/install/v3/install-ldap.html
  # [ldap login]
  # KUBOARD_LOGIN_TYPE: "ldap"
  # KUBOARD_ROOT_USER: "your-user-name-in-ldap"
  # LDAP_HOST: "ldap-ip-address:389"
  # LDAP_BIND_DN: "cn=admin,dc=example,dc=org"
  # LDAP_BIND_PASSWORD: "admin"
  # LDAP_BASE_DN: "dc=example,dc=org"
  # LDAP_FILTER: "(objectClass=posixAccount)"
  # LDAP_ID_ATTRIBUTE: "uid"
  # LDAP_USER_NAME_ATTRIBUTE: "uid"
  # LDAP_EMAIL_ATTRIBUTE: "mail"
  # LDAP_DISPLAY_NAME_ATTRIBUTE: "cn"
  # LDAP_GROUP_SEARCH_BASE_DN: "dc=example,dc=org"
  # LDAP_GROUP_SEARCH_FILTER: "(objectClass=posixGroup)"
  # LDAP_USER_MACHER_USER_ATTRIBUTE: "gidNumber"
  # LDAP_USER_MACHER_GROUP_ATTRIBUTE: "gidNumber"
  # LDAP_GROUP_NAME_ATTRIBUTE: "cn"

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuboard-boostrap
  namespace: kuboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuboard-boostrap-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kuboard-boostrap
  namespace: kuboard

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s.kuboard.cn/name: kuboard-etcd
  name: kuboard-etcd
  namespace: kuboard
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-etcd
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-etcd
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists
              - matchExpressions:
                  - key: k8s.kuboard.cn/role
                    operator: In
                    values:
                      - etcd
      containers:
        - env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: HOSTIP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
          image: 'registry.cn-hangzhou.aliyuncs.com/cxfpublic/etcd-host:3.4.16-1'
          imagePullPolicy: Always
          name: etcd
          ports:
            - containerPort: 2381
              hostPort: 2381
              name: server
              protocol: TCP
            - containerPort: 2382
              hostPort: 2382
              name: peer
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 2381
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          volumeMounts:
            - mountPath: /data
              name: data
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      serviceAccount: kuboard-boostrap
      serviceAccountName: kuboard-boostrap
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
      volumes:
        - hostPath:
            path: /usr/share/kuboard/etcd
          name: data
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate


---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-v3
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-v3
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              weight: 100
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists
              weight: 100
      containers:
        - env:
            - name: HOSTIP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          envFrom:
            - configMapRef:
                name: kuboard-v3-config
          image: 'registry.cn-hangzhou.aliyuncs.com/cxfpublic/kuboard:v3'
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /kuboard-resources/version.json
              port: 80
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: kuboard
          ports:
            - containerPort: 80
              name: web
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
            - containerPort: 10081
              name: peer
              protocol: TCP
            - containerPort: 10081
              name: peer-u
              protocol: UDP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /kuboard-resources/version.json
              port: 80
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
          # startupProbe:
          #   failureThreshold: 20
          #   httpGet:
          #     path: /kuboard-resources/version.json
          #     port: 80
          #     scheme: HTTP
          #   initialDelaySeconds: 5
          #   periodSeconds: 10
          #   successThreshold: 1
          #   timeoutSeconds: 1
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccount: kuboard-boostrap
      serviceAccountName: kuboard-boostrap
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists

---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    k8s.kuboard.cn/name: kuboard-v3
  name: kuboard-v3
  namespace: kuboard
spec:
  ports:
    - name: web
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 80
    - name: tcp
      nodePort: 30081
      port: 10081
      protocol: TCP
      targetPort: 10081
    - name: udp
      nodePort: 30081
      port: 10081
      protocol: UDP
      targetPort: 10081
  selector:
    k8s.kuboard.cn/name: kuboard-v3
  sessionAffinity: None
  type: NodePort

3. Deploy

kubectl apply -f kuboard-v3.yaml

# Run `watch kubectl get pods -n kuboard` and wait until every Pod in the kuboard namespace is ready
kubectl get pods -n kuboard

4. Log in

Kuboard is served on the master node's IP:
http://192.168.56.11:30080/

Username: admin, Password: Kuboard123
