This article documents the process of installing and deploying a Kubernetes cluster with the kubeadm tool. The cluster consists of two hosts: one master node and one worker node. If you have more worker nodes, simply repeat the join steps for each additional host.
Note: network restrictions were the hard part of this installation. Our machines sit on the company intranet and can only reach the internet through a proxy. Four steps in this deployment need internet access: yum install, docker pull, and helm install are handled by configuring a proxy; for kubectl apply, the remote configuration files are downloaded to the local machine first and then applied locally.
1. System information
We will build a simple cluster with just one master node and one worker node.
Hosts:
- master node: ac3-node06 = 10.129.5.77
- worker node: ac3-node05 = 10.129.5.78
Software versions:
- OS: CentOS 7.3
- Kubernetes: v1.23.1
- kubeadm: v1.23.1
2. Installation and deployment
2.1 Pre-installation preparation
To avoid errors and other trouble during installation, complete the following preparation on both hosts.
2.1.1 Configure hosts
[root@ac3-node06 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
10.129.5.77 ac3-node06
10.129.5.78 ac3-node05
[root@ac3-node06 ~]#
2.1.2 Disable the firewall
[root@ac3-node06 ~]# systemctl stop firewalld && systemctl disable firewalld
2.1.3 Turn off swap
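The exact commands are not recorded here; a minimal sketch of what is typically run on both hosts (the sed pattern is an assumption about the /etc/fstab layout):
# turn swap off immediately (by default kubelet refuses to run with swap enabled)
swapoff -a
# comment out any swap entry so it stays off after a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab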
2.2 Install kubeadm, kubelet, and kubectl
In this step, install kubeadm, kubelet, and kubectl on both hosts.
- kubeadm: the command-line tool that bootstraps the cluster.
- kubelet: the component that runs on every machine in the cluster and performs operations such as starting pods and containers.
- kubectl: the command-line tool for talking to your cluster.
Note: keep the versions of the three components compatible with each other and with the cluster version; see the example below.
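For example, to pin all three packages to the cluster version used here (v1.23.1), the yum install in section 2.2.2 can name the versions explicitly (a sketch; the exact release suffix in the repository may differ):
sudo yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1 --disableexcludes=kubernetes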
2.2.1 Configure kernel parameters
sudo modprobe br_netfilter
lsmod | grep br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
2.2.2 Install kubeadm, kubelet, and kubectl
# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# Set SELinux to permissive mode
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Because our servers are on the intranet, yum needs a proxy to reach the internet
vi /etc/yum.conf
Add this line:
proxy=http://proxy_user:proxy_password@proxy_hostname:port
# Install kubelet, kubeadm, and kubectl with yum
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable the service so it starts on boot
sudo systemctl enable --now kubelet
# After this, kubelet restarts every few seconds in a crash loop, waiting for instructions from kubeadm.
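If you want to watch this behavior, the usual service tools work (optional check):
# kubelet stays in an activating (auto-restart) loop until kubeadm init/join supplies its configuration
systemctl status kubelet
journalctl -u kubelet -f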
2.3 Create the Kubernetes cluster with kubeadm (master node)
The following steps are performed only on the master host; the worker host does not need them.
2.3.1 Start the Docker service
Docker must be running before kubeadm init, because initialization pulls the required images over the network with docker pull.
- Start the Docker service
# Start docker
systemctl enable docker.service
systemctl start docker.service
- Configure a proxy for Docker (if needed)
# Create the drop-in configuration directory
mkdir -p /etc/systemd/system/docker.service.d
# Configure the HTTP proxy
vi /etc/systemd/system/docker.service.d/http-proxy.conf
Write:
[Service]
Environment="HTTP_PROXY=http://proxy_user:proxy_password@proxy_hostname:port" "NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16"
# Configure the HTTPS proxy
vi /etc/systemd/system/docker.service.d/https-proxy.conf
Write:
[Service]
Environment="HTTPS_PROXY=http://proxy_user:proxy_password@proxy_hostname:port"
"NO_PROXY=localhost,*.samwong.im,192.168.0.0/16,127.0.0.1,10.244.0.0/16"
# Reload systemd and restart docker
systemctl daemon-reload && systemctl restart docker
# Verify that the proxy settings took effect
docker info
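To see just the proxy-related lines of the output, for example:
# HTTP Proxy / HTTPS Proxy / No Proxy should match the values configured above
docker info | grep -i proxy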
2.3.2 Initialize the cluster with kubeadm
First, review the options of kubeadm init:
kubeadm init --help
# --apiserver-advertise-address: the master's own IP
# --image-repository: use the Aliyun mirror registry for the control-plane images
# --service-cidr: the IP range used for Service VIPs (default 10.96.0.0/12)
# --pod-network-cidr: the Pod network CIDR
# Initialize
kubeadm init --kubernetes-version=1.23.1 \
--apiserver-advertise-address=10.129.5.77 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
......
# Initialization succeeded
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.129.5.77:6443 --token p6kb2h.ndduahfcvppw71x5 \
--discovery-token-ca-cert-hash sha256:d02ac1b5a5cb359131d7ff43cd5d88ad58fb7e6412bb66cec28f168be0bf9924
[root@ac3-node06 ~]#
Following the success message above, run the command below before using the cluster (in production, as a non-root user, run the mkdir/cp/chown commands from the message instead):
[root@ac3-node06 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
Now the cluster can be accessed with kubectl:
[root@ac3-node06 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@ac3-node06 ~]# source /etc/profile
[root@ac3-node06 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
ac3-node06 NotReady control-plane,master 46m v1.23.1
At this point the master node is still NotReady, because no Pod network (CNI) plugin has been installed yet. Note also that, to protect the control plane, the master is tainted by default so that ordinary workloads are not scheduled onto it. After the network plugin below is installed, the master becomes Ready.
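If you want to see the scheduling restriction mentioned above, the control-plane taint can be inspected (read-only check):
# the NoSchedule taint keeps ordinary workloads off the control-plane node
kubectl describe node ac3-node06 | grep -i taint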
2.3.3 Install a Pod network plugin (CNI)
There are many Pod network plugins to choose from; see the official documentation. This article deploys the Flannel plugin so that services in different Pods can talk to each other. The plugin must run on every node; applying the manifest once on the master is enough, because it is deployed as a DaemonSet.
# This step took some detours: my server is on the intranet and cannot reach the server hosting kube-flannel.yml, so the command kept failing
[root@ac3-node06 ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
[root@ac3-node06 ~]#
# The eventual fix: download the yml file on a computer with internet access, copy it to the master, and apply it locally (see the sketch below)
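# A sketch of that workaround (the download path and copy target are placeholders for illustration):
# on a machine that can reach the internet (directly or through the proxy):
curl -fL -o kube-flannel.yml https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# copy the file to the master, then apply it locally as shown below:
scp kube-flannel.yml root@10.129.5.77:/root/downloads/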
[root@ac3-node06 downloads]# ls -ltr kube-flannel.yml
-rw-r--r-- 1 root root 5199 Jan 15 16:54 kube-flannel.yml
[root@ac3-node06 downloads]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@ac3-node06 downloads]#
# The flannel plugin is now installed
# The master node is now in Ready state
[root@ac3-node06 downloads]# kubectl get node
NAME STATUS ROLES AGE VERSION
ac3-node06 Ready control-plane,master 118m v1.23.1
2.4 Add a worker node to the Kubernetes cluster
With the master node in good shape, we now join the other host to the cluster as a worker node.
2.4.1 Add the node with kubeadm join
On the worker node, run the kubeadm join command to add it to the cluster:
[root@ac3-node05 ~]# kubeadm join 10.129.5.77:6443 --token p6kb2h.ndduahfcvppw71x5 --discovery-token-ca-cert-hash sha256:d02ac1b5a5cb359131d7ff43cd5d88ad58fb7e6412bb66cec28f168be0bf9924
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@ac3-node05 ~]#
The node joined successfully. As the message suggests, go back to the master and run kubectl get nodes to check the new node:
[root@ac3-node06 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
ac3-node05 Ready <none> 17m v1.23.1
ac3-node06 Ready control-plane,master 118m v1.23.1
[root@ac3-node06 ~]#
The worker node has joined the cluster and its status is normal.
2.5 Install the Dashboard add-on
Everything so far has been done from the command line on the Linux hosts, which gives no overall picture of the cluster and is not very convenient to operate. A visual interface fixes that: the Dashboard add-on.
Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It lets users manage and troubleshoot the applications running in the cluster, as well as manage the cluster itself. It is installed from the master node.
# Install the add-on
[root@ac3-node06 downloads]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
[root@ac3-node06 downloads]#
# It fails again: the intranet server still cannot reach https://raw.githubusercontent.com. Use the same workaround as for the flannel plugin: download the yaml configuration file (https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml) to the local machine first, then apply it locally
# Download the yaml locally and apply it from disk
[root@ac3-node06 downloads]# ls -ltr recommended.yaml
-rw-r--r-- 1 root root 7543 Jan 15 17:07 recommended.yaml
# One extra step here: by default the Dashboard is only reachable from inside the cluster, so with the stock manifest the UI could only be opened from a browser on the master host itself, which is inconvenient. Change the Service type to NodePort so the UI is exposed outside the cluster (the nodePort must fall inside the default 30000-32767 range)
[root@ac3-node06 downloads]# vi recommended.yaml
# Add two lines to the kubernetes-dashboard Service definition
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          ---- add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     ---- add this line
  selector:
    k8s-app: kubernetes-dashboard
# Apply the local yaml file to install again
[root@ac3-node06 downloads]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
Warning: resource secrets/kubernetes-dashboard-key-holder is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
deployment.apps/dashboard-metrics-scraper unchanged
[root@ac3-node06 downloads]#
# Check whether the dashboard pods are present
[root@ac3-node06 downloads]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d8c4cb4d-bm2sz 1/1 Running 0 163m
kube-system coredns-6d8c4cb4d-rtxkk 1/1 Running 0 163m
kube-system etcd-ac3-node06 1/1 Running 0 163m
kube-system kube-apiserver-ac3-node06 1/1 Running 0 163m
kube-system kube-controller-manager-ac3-node06 1/1 Running 0 163m
kube-system kube-flannel-ds-sqgsj 1/1 Running 0 50m
kube-system kube-flannel-ds-tl5x6 1/1 Running 0 50m
kube-system kube-proxy-75fdb 1/1 Running 0 63m
kube-system kube-proxy-s56vm 1/1 Running 0 163m
kube-system kube-scheduler-ac3-node06 1/1 Running 0 163m
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-n2rch 1/1 Running 0 36m
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-tm5k4 1/1 Running 0 36m
[root@ac3-node06 downloads]#
# The last two lines show the dashboard is up. Inspect the dashboard Service
[root@ac3-node06 downloads]# kubectl describe service kubernetes-dashboard --namespace=kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.104.204.178
IPs: 10.104.204.178
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 30001/TCP
Endpoints: 10.244.1.3:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
[root@ac3-node06 downloads]#
# Check which node the dashboard is deployed on
[root@ac3-node06 downloads]# kubectl get pods --namespace=kubernetes-dashboard -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dashboard-metrics-scraper-799d786dbf-n2rch 1/1 Running 0 38m 10.244.1.4 ac3-node05 <none> <none>
kubernetes-dashboard-6b6b86c4c5-tm5k4 1/1 Running 0 38m 10.244.1.3 ac3-node05 <none> <none>
[root@ac3-node06 downloads]#
The Dashboard add-on is now installed and can be opened in a browser at https://<master hostname>:30001 (the NodePort is exposed on every node, so any node's address works).
Next, create a token for logging in to the Dashboard.
# Create a serviceaccount
[root@ac3-node06 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@ac3-node06 downloads]# kubectl get sa -n kubernetes-dashboard
NAME SECRETS AGE
default 1 48m
kubernetes-dashboard 1 48m
[root@ac3-node06 ~]#
# Bind the serviceaccount to the cluster-admin ClusterRole so it has full administrative access to the whole cluster
[root@ac3-node06 downloads]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@ac3-node06 downloads]#
# Retrieve the serviceaccount's secret to get the token
[root@ac3-node06 ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-7fh58
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 57c6a4f7-ad88-4702-aad0-9e2a0215dacc
Type: kubernetes.io/service-account-token
Data
====
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImZYdGFKLTFXRDVGbkFXa3NrWi1MRk95VVJPY2xwQl92RzlsX3VwbXFsR0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tN2ZoNTgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTdjNmE0ZjctYWQ4OC00NzAyLWFhZDAtOWUyYTAyMTVkYWNjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.SRvzZxH0gzo8fG1MmRQZGOgi3PnyQ79XKTKx4J8x9nlwolbcBo66dU_LaDEVgrzSx3r9D5M6a9ISBXjhGv9teTZ5MeoAB84y327zRXEOEWeMUG6mnXXAE7X1qmMHTNlbLLpRNC8XF3STW2Swq3PhCrolGIBfwUe46Kh9PbBXxQzbLmdmUOxqxy4eX_b2xqcmVY9bGXuw2Zthr0Ue5XGqFRzVW7FvN678IDsoZszrxA5oo1qKdKPCWuuk2zx6eQqsV5an0Uq_BugGpc_tjhW_DvPuB9n2d6nHBdDokBiustiKNpr9Cm_LL5EYepDmxzq4dguDgU_J3OZpvZCy2xBM3w
ca.crt: 1099 bytes
[root@ac3-node06 downloads]#
# The token created here is the one used to log in to the Dashboard UI. There is no need to memorize it; the command below prints it on demand
# Quick command to view the token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/admin/{print $1}')
Enter the token on the browser login page to sign in.
The Dashboard add-on is now fully set up; from here on, cluster management and application deployment can be done from this interface.
3. Deploying containerized applications
Once the Kubernetes cluster is up, there are several ways to deploy containerized applications: from the command line on the master host, with the Helm package manager, or by adding them manually through the Dashboard UI.
3.1 Deploy a containerized application from the command line
When no Dashboard is available, containerized applications can be created from the command line on the master host. Below we deploy a MySQL service.
3.1.1 Create the Service
Create a Service that gives the soon-to-be-deployed MySQL database a fixed IP for connections and provides load balancing.
Create the mysql-service configuration file:
[root@ac3-node06 downloads]# vi mysql-service.yaml
Write the following content:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
# Create the resource
[root@ac3-node06 downloads]# kubectl create -f mysql-service.yaml
service/mysql created
[root@ac3-node06 downloads]#
The configuration above creates a Service object named mysql, which proxies requests to Pods that listen on TCP port 3306 and carry the label app=mysql.
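To confirm the Service was created, a quick check (optional; the endpoints stay empty until the MySQL Pod from section 3.1.4 is running):
# CLUSTER-IP is the stable address; Endpoints are filled in once a matching Pod is Ready
kubectl get service mysql
kubectl describe service mysql | grep -i endpoints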
3.1.2 Create the PersistentVolume (PV)
Create a persistent volume for MySQL, mysql-pv.yaml (when a Pod ceases to exist, Kubernetes destroys its ephemeral volumes, but persistent volumes are not destroyed).
# Create the pv configuration file
[root@ac3-node06 downloads]# vi mysql-pv.yaml
Write the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce   # the volume can be mounted read-write by a single node
  hostPath:
    path: "/mnt/data"
# Create the resource
[root@ac3-node06 downloads]# kubectl create -f mysql-pv.yaml
3.1.3 Create the PersistentVolumeClaim (PVC)
Persistent volumes are resources in the cluster, while persistent volume claims are requests for those resources and act as claim checks against them. Below we create a persistent volume claim named mysql-pvc in mysql-pvc.yaml.
# Create the pvc configuration file
[root@ac3-node06 downloads]# vi mysql-pvc.yaml
Write the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
# Create the resource
[root@ac3-node06 downloads]# kubectl create -f mysql-pvc.yaml
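An optional check that the claim bound to the volume created in 3.1.2:
# STATUS should show Bound, with the PVC's VOLUME column pointing at mysql-pv
kubectl get pv mysql-pv
kubectl get pvc mysql-pvc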
3.1.4 Deploy MySQL
Create the Pod from the MySQL 5.7 image on port 3306, using mysql-deployment.yaml.
# Create the deployment configuration file
[root@ac3-node06 downloads]# vi mysql-deployment.yaml
Write the configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD   # use a Secret in production
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-pvc
# Create the resource
[root@ac3-node06 downloads]# kubectl create -f mysql-deployment.yaml
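Before connecting, you can watch the Pod come up (optional; -w streams status updates):
kubectl get pods -l app=mysql -w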
MySQL is now deployed, and we can connect to it from the command line:
[root@ac3-node06 downloads]# kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -hmysql -ppassword
3.2 Deploy packaged applications with Helm
Helm is often called the yum of the Kubernetes world: it is Kubernetes' package manager. Common infrastructure services can be deployed with Helm.
Helm works with charts and releases: the helm client creates and manages them and talks to the Kubernetes API server. (Helm 2 additionally required a Tiller server running inside the cluster to handle client requests, but Helm 3, which is used here, removed Tiller.)
3.2.1 Install Helm
Installing the Helm client is simple: download the helm release archive, extract it, and place the binary in /usr/local/bin/ on the master node.
# Helm installation docs: https://helm.sh/docs/intro/install/
# Download helm: pick a release compatible with your Kubernetes version from https://github.com/helm/helm/releases (Helm/Kubernetes version mapping: https://helm.sh/docs/topics/version_skew/)
# Because our server is on the intranet, wget goes through the proxy
[root@ac3-node06 downloads]# wget https://get.helm.sh/helm-v3.7.2-linux-amd64.tar.gz -e use_proxy=yes -e http_proxy=http://username:password@yourproxy.com:port --no-check-certificate
# Extract helm
[root@ac3-node06 downloads]# tar -zxvf helm-v3.7.2-linux-amd64.tar.gz
# Move helm into the bin directory
[root@ac3-node06 downloads]# mv linux-amd64/helm /usr/local/bin/helm
[root@ac3-node06 downloads]# helm version
version.BuildInfo{Version:"v3.7.2", GitCommit:"663a896f4a815053445eec4153677ddc24a0a361", GitTreeState:"clean", GoVersion:"go1.16.10"}
[root@ac3-node06 downloads]#
# Initialize a Helm chart repository
# Again, because this is an intranet machine, set the proxy first so helm can reach the internet
[root@ac3-node06 ~]# export http_proxy=http://proxy_user:proxy_password@proxy_hostname:port
[root@ac3-node06 ~]# export https_proxy=http://proxy_user:proxy_password@proxy_hostname:port
# Add the chart repository
[root@ac3-node06 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
[root@ac3-node06 ~]#
# Repository added; list the available charts
[root@ac3-node06 ~]# helm search repo bitnami
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/bitnami-common 0.0.9 0.0.9 DEPRECATED Chart with custom templates used in ...
bitnami/airflow 11.3.0 2.2.3 Apache Airflow is a platform to programmaticall...
bitnami/apache 9.0.0 2.4.52 Chart for Apache HTTP Server
...
[root@ac3-node06 ~]#
# Unset the global proxy
[root@ac3-node06 ~]# unset http_proxy
[root@ac3-node06 ~]# unset https_proxy
Helm is installed; now we can use it to deploy a MySQL service.
3.2.2 Deploy MySQL
Special note: the MySQL Helm chart needs a PV, and an out-of-the-box Kubernetes cluster has no PV backend configured by default, so an NFS server (or similar) has to be added as backing storage before running the steps below. A plain helm install without it fails with: "0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims." (This document does not cover configuring dynamic PV provisioning for the cluster; it is a fair amount of work...)
The steps below assume dynamic PV provisioning is already configured for the cluster (a quick check is shown right after this paragraph). With that in place, deploy MySQL as follows.
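One quick way to check is to look for a default StorageClass:
# a default class is marked "(default)"; without one, the chart's PVC stays Pending and the error above appears
kubectl get storageclass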
Before deploying MySQL, update the Helm chart repositories:
[root@ac3-node06 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@ac3-node06 ~]#
Install MySQL
# Look for the MySQL chart in the helm repository
[root@ac3-node06 ~]# helm search repo mysql
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/mysql 8.8.21 8.0.27 Chart to create a Highly available MySQL cluster
bitnami/phpmyadmin 9.0.0 5.1.1 phpMyAdmin is an mysql administration frontend
bitnami/mariadb 10.3.1 10.5.13 Fast, reliable, scalable, and easy to use open-...
bitnami/mariadb-cluster 1.0.2 10.2.14 DEPRECATED Chart to create a Highly available M...
bitnami/mariadb-galera 6.2.0 10.6.5 MariaDB Galera is a multi-master database clust...
[root@ac3-node06 ~]#
# Install MySQL
[root@ac3-node06 ~]# helm install bitnami/mysql --generate-name
Error: INSTALLATION FAILED: failed to download "bitnami/mysql"
[root@ac3-node06 ~]#
It fails: helm install needs to download the chart from the internet, and since we are on an intranet server a proxy has to be set.
Set the global proxy and try again:
[root@ac3-node06 ~]# export http_proxy=http://proxy_user:proxy_password@proxy_hostname:port
[root@ac3-node06 ~]# export https_proxy=http://proxy_user:proxy_password@proxy_hostname:port
[root@ac3-node06 ~]# helm install bitnami/mysql --generate-name
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: 6443/version?timeout=32s": Forbidden
[root@ac3-node06 ~]#
This time the error is different: the Kubernetes cluster is unreachable.
After a lot of searching (some posts say helm needs KUBECONFIG set, as with k3s; others blame bugs in the chart source), nothing solved my problem. Eventually I found a post explaining that the global proxy itself was the reason the intranet Kubernetes cluster could not be reached. The fix: when setting the global proxy, exclude the cluster master's IP address via no_proxy.
With that change it works:
[root@ac3-node06 ~]# export https_proxy=http://proxy_user:proxy_password@proxy_hostname:port no_proxy=10.129.5.77
[root@ac3-node06 ~]# export http_proxy=http://proxy_user:proxy_password@proxy_hostname:port no_proxy=10.129.5.77
[root@ac3-node06 ~]# helm install bitnami/mysql --generate-name
NAME: mysql-1642489096
LAST DEPLOYED: Tue Jan 18 14:58:20 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 8.8.21
APP VERSION: 8.0.27
** Please be patient while the chart is being deployed **
Tip:
Watch the deployment status using the command: kubectl get pods -w --namespace default
Services:
echo Primary: mysql-1642489096.default.svc.cluster.local:3306
Execute the following to get the administrator credentials:
echo Username: root
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1642489096 -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
To connect to your database:
1. Run a pod that you can use as a client:
kubectl run mysql-1642489096-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.27-debian-10-r78 --namespace default --command -- bash
2. To connect to primary service (read/write):
mysql -h mysql-1642489096.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
To upgrade this helm chart:
1. Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:
ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1642489096 -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
helm upgrade --namespace default mysql-1642489096 bitnami/mysql --set auth.rootPassword=$ROOT_PASSWORD
[root@ac3-node06 ~]#
# Check the MySQL deployment status
[root@ac3-node06 ~]# kubectl get pods -w --namespace default
NAME READY STATUS RESTARTS AGE
mysql-1642489096-0 0/1 Running 0 13m
[root@ac3-node06 ~]#
Back in the Dashboard, the MySQL service is now visible.
Connect to and use MySQL by following the post-install notes shown above.
3.3 Deploy containerized applications from the Dashboard
Besides the command-line approaches above, containerized applications can also be deployed through the Dashboard.
The Dashboard UI offers three ways to deploy a containerized application: edit a YAML/JSON configuration by hand, upload a local YAML/JSON configuration file, or follow the creation wizard.
Here we use the third option: fill in the wizard form and click Deploy to create an nginx service.
The deployment succeeds.