Kubernetes learning: 8. Installing the flannel plugin

In a Kubernetes environment, the docker0 NIC on every node has the same IP address, which makes inter-node communication a problem. Flannel is one solution, providing cross-node network communication. This article walks through installing flannel with yum on CentOS 7, configuring its network options, creating the network configuration in etcd, and starting and verifying the flanneld service, so that the nodes can reach each other over the network.


Installing the flannel network plugin

If the docker service is installed on each node, inspecting the network interfaces shows that the docker0 NIC on every node has the same IP, 172.17.0.1:

[root@wecloud-test-k8s-4 ~]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:8e:7c:23:ea  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.99.196  netmask 255.255.255.0  broadcast 192.168.99.255
        inet6 fe80::f816:3eff:feb1:afe9  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:b1:af:e9  txqueuelen 1000  (Ethernet)
        RX packets 10815343  bytes 1108180112 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6551758  bytes 933543908 (890.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 32212  bytes 1680632 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32212  bytes 1680632 (1.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

This raises a question: how do the nodes communicate with each other? Kubernetes itself does not ship a multi-node networking solution, which is why network plugins such as flannel, calico, and weave exist. This article covers the flannel approach.

The official flannel documentation is available at:
https://coreos.com/flannel/docs/latest/


Deployment steps

If you have no specific version requirement for flannel, you can install it directly on CentOS 7 with yum.

[root@wecloud-test-k8s-2 ~]# yum install flannel -y

The systemd unit file for the flannel service is /usr/lib/systemd/system/flanneld.service; its contents are as follows:

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
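The ExecStartPost line runs mk-docker-opts.sh to translate the subnet that flanneld leased into docker daemon options. As a rough illustration (the exact subnet is assigned at runtime, so the address below is only an assumption), the generated /run/flannel/docker file looks something like:

```shell
# /run/flannel/docker, written by mk-docker-opts.sh after flanneld starts.
# The subnet (172.30.46.1/24 here) is illustrative: flannel leases one
# /24 per node out of the 172.30.0.0/16 pool configured in etcd below.
DOCKER_NETWORK_OPTIONS=" --bip=172.30.46.1/24 --ip-masq=true --mtu=1450"
```

Because of the Before=docker.service ordering, docker can source this file at startup and put its bridge inside the flannel-assigned subnet.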

The unit file references the configuration file /etc/sysconfig/flanneld, which should contain the following:

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

If a node has multiple NICs, you also need to specify in FLANNEL_OPTIONS which interface carries traffic to the other nodes.
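For example, flanneld's -iface option pins it to a specific interface. A sketch of the extended FLANNEL_OPTIONS line (eth0 is a placeholder; substitute the NIC that reaches the other nodes):

```shell
# /etc/sysconfig/flanneld -- add -iface to the existing options.
FLANNEL_OPTIONS="-iface=eth0 -etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
```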


Creating the network configuration in etcd

Run the following commands to allocate an IP address range for docker:

[root@wecloud-test-k8s-2 ~]# etcdctl --endpoints=https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379 \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
> --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> mkdir /kube-centos/network
[root@wecloud-test-k8s-2 ~]# etcdctl --endpoints=https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379 \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
> --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}
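To confirm the key was written, the same etcdctl (v2 API) flags can read it back; a sketch reusing the endpoints and certificates from above:

```shell
etcdctl --endpoints=https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config
```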

These commands create the subnet address pool and set the backend type to vxlan. Note that flannel's vxlan backend has relatively low performance, so for production environments host-gw is recommended (simply replace vxlan with host-gw in the config), provided all nodes sit on the same layer-2 network.
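The Network/SubnetLen pair determines how many per-node subnets flannel can lease out: a /16 pool carved into /24 leases serves 256 nodes. A small sketch of that arithmetic with Python's ipaddress module:

```python
import ipaddress

# The pool and lease size from the etcd config above.
pool = ipaddress.ip_network("172.30.0.0/16")
subnet_len = 24

# flannel leases one /24 per node out of the /16 pool.
node_subnets = list(pool.subnets(new_prefix=subnet_len))
print(len(node_subnets))   # 256 -> maximum number of nodes this pool can serve
print(node_subnets[0])     # 172.30.0.0/24 -> the first possible lease
```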

Starting the flannel service

Start and enable the flanneld service on the three nodes.
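A sketch of the usual systemd sequence, run on each node; docker is restarted afterwards so it picks up the bridge options flannel wrote to /run/flannel/docker:

```shell
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld

# Restart docker so it reads DOCKER_NETWORK_OPTIONS written by flannel.
systemctl restart docker
```

After this, docker0 on each node should fall inside a distinct 172.30.x.0/24 subnet, and pods on different nodes can reach each other.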
