`kubeadm init` fails during the preflight phase with the following output:

```
W0317 15:46:50.469157   61918 initconfiguration.go:125] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```

### Resolving the kubeadm preflight check errors

#### 1. CRI endpoint error (criSocket unix:///run/cri-dockerd.sock)

This error indicates that kubelet cannot connect to the Container Runtime Interface (CRI). Likely causes:

1. **cri-dockerd is not installed or not running**

   - Docker is no longer supported out of the box since Kubernetes 1.24 (dockershim was removed), so the cri-dockerd adapter must be installed.
   - Installation:

     ```bash
     git clone https://github.com/Mirantis/cri-dockerd.git
     cd cri-dockerd
     sudo make install
     sudo systemctl enable cri-docker && sudo systemctl start cri-docker
     ```

   - Verify the service status:

     ```bash
     systemctl status cri-docker
     ```

2. **kubelet misconfiguration**

   Check whether the kubelet configuration file `/var/lib/kubelet/kubeconfig` exists. If it does not, re-initialize the cluster:

   ```bash
   kubeadm reset
   kubeadm init --cri-socket=unix:///run/cri-dockerd.sock
   ```

   A `stat /var/lib/kubelet/kubeconfig` error (as shown in reference [3]) indicates that kubelet initialization failed. [^3]

#### 2. bridge-nf-call-iptables error

This error stems from the Linux kernel network configuration. Fix it with the following steps:

1. **Load the br_netfilter module**

   ```bash
   sudo modprobe br_netfilter
   # Make the module load persistent across reboots
   echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf
   ```

2. **Configure the sysctl parameters**

   ```bash
   cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
   net.bridge.bridge-nf-call-ip6tables = 1
   net.bridge.bridge-nf-call-iptables = 1
   EOF
   sudo sysctl --system
   ```

3. **Verify the configuration**

   ```bash
   ls /proc/sys/net/bridge/                    # should list bridge-nf-call-iptables and related files
   sysctl net.bridge.bridge-nf-call-iptables   # should return 1
   ```

   If this still reports `No such file or directory` (as shown in reference [2]), the module was not loaded correctly. [^2]

#### 3. Additional checks

1. **Firewall configuration**

   Make sure the ports Kubernetes requires are open (as shown in reference [1]):

   ```bash
   sudo firewall-cmd --permanent --add-port={6443,2379-2380,10250,10259,10257}/tcp
   sudo firewall-cmd --reload
   ```

2. **Disable swap**

   ```bash
   swapoff -a
   sed -i '/ swap / s/^/#/' /etc/fstab
   ```
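Once cri-dockerd is running, it is worth confirming that the runtime actually answers on the socket before retrying `kubeadm init`. A quick check with `crictl`, pointed explicitly at the cri-dockerd socket (the path is assumed to match the one in the error message above):

```bash
# Query the runtime through the cri-dockerd socket; both commands should
# succeed without the "connection refused / no such file" errors seen
# when the socket path is wrong or the service is down.
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
```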
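The individual checks above can also be bundled into a short script to run before each retry. This is a minimal sketch, assuming a systemd-based host and the socket path from the error message:

```bash
#!/usr/bin/env bash
# Pre-kubeadm sanity check; mirrors the manual verification steps above.
set -u

echo "== br_netfilter module =="
lsmod | grep -q '^br_netfilter' && echo "loaded" || echo "NOT loaded (run: sudo modprobe br_netfilter)"

echo "== required sysctls =="
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
    # Print MISSING if the key does not exist (e.g. module not loaded)
    printf '%-40s %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo MISSING)"
done

echo "== swap =="
if swapon --show | grep -q .; then echo "swap is ON (run: sudo swapoff -a)"; else echo "swap is off"; fi

echo "== cri-dockerd socket =="
if [ -S /var/run/cri-dockerd.sock ]; then echo "socket present"; else echo "socket missing (is cri-docker active?)"; fi
```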

相关推荐
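Finally, the deprecation warning at the top of the log (kubeadm auto-prepending the `unix` scheme) can be avoided by passing the socket declaratively. A minimal sketch of a kubeadm configuration file using the `kubeadm.k8s.io/v1beta3` API (run `kubeadm config print init-defaults` to confirm the API version your kubeadm expects):

```yaml
# kubeadm-config.yaml — minimal sketch; adjust values to your environment
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  # Explicit URL scheme, so kubeadm no longer has to prepend "unix" itself
  criSocket: unix:///var/run/cri-dockerd.sock
```

Run it with `sudo kubeadm init --config kubeadm-config.yaml`. Note that `--config` cannot be combined with most other `kubeadm init` flags, so options such as the pod network CIDR would also go into this file.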

huitian@k8s-control-1:~$ sudo kubeadm init --control-plane-endpoint "10.32.11.220:16443" --upload-certs --cri-socket=unix:///var/run/cri-dockerd.sock --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.32.11.221 --v=5 I0814 13:49:46.690237 652314 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd" I0814 13:49:46.698002 652314 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt I0814 13:49:47.707588 652314 version.go:256] remote version is much newer: v1.33.4; falling back to: stable-1.30 I0814 13:49:47.707663 652314 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.30.txt [init] Using Kubernetes version: v1.30.14 [preflight] Running pre-flight checks I0814 13:49:48.755580 652314 checks.go:561] validating Kubernetes and kubeadm version I0814 13:49:48.755646 652314 checks.go:166] validating if the firewall is enabled and active I0814 13:49:48.767478 652314 checks.go:201] validating availability of port 6443 I0814 13:49:48.767600 652314 checks.go:201] validating availability of port 10259 I0814 13:49:48.767619 652314 checks.go:201] validating availability of port 10257 I0814 13:49:48.767633 652314 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml I0814 13:49:48.767646 652314 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml I0814 13:49:48.767657 652314 checks.go:278] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml I0814 13:49:48.767663 652314 checks.go:278] validating the existence of file /etc/kubernetes/manifests/etcd.yaml I0814 13:49:48.767674 652314 checks.go:428] validating if the connectivity type is via proxy or direct I0814 13:49:48.767692 652314 checks.go:467] validating http connectivity to first IP address in the CIDR I0814 13:49:48.767709 652314 checks.go:467] validating http connectivity to first IP address in the CIDR I0814 13:49:48.767720 652314 checks.go:102] validating the container runtime I0814 13:49:48.798500 652314 checks.go:637] validating whether swap is enabled or not I0814 13:49:48.798630 652314 checks.go:368] validating the presence of executable crictl I0814 13:49:48.798696 652314 checks.go:368] validating the presence of executable conntrack I0814 13:49:48.798741 652314 checks.go:368] validating the presence of executable ip I0814 13:49:48.798793 652314 checks.go:368] validating the presence of executable iptables I0814 13:49:48.798838 652314 checks.go:368] validating the presence of executable mount I0814 13:49:48.798883 652314 checks.go:368] validating the presence of executable nsenter I0814 13:49:48.798925 652314 checks.go:368] validating the presence of executable ethtool I0814 13:49:48.798963 652314 checks.go:368] validating the presence of executable tc I0814 13:49:48.799002 652314 checks.go:368] validating the presence of executable touch I0814 13:49:48.799089 652314 checks.go:514] running all checks I0814 13:49:48.815214 652314 checks.go:399] checking whether the given node name is valid and reachable using net.LookupHost I0814 13:49:48.815243 652314 checks.go:603] validating kubelet version I0814 13:49:48.864298 652314 checks.go:128] validating if the "kubelet" service is enabled and active I0814 13:49:48.892228 652314 checks.go:201] validating availability of port 10250 I0814 13:49:48.892345 652314 checks.go:327] 
validating the contents of file /proc/sys/net/ipv4/ip_forward I0814 13:49:48.892428 652314 checks.go:201] validating availability of port 2379 I0814 13:49:48.892477 652314 checks.go:201] validating availability of port 2380 I0814 13:49:48.892525 652314 checks.go:241] validating the existence and emptiness of directory /var/lib/etcd [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0814 13:49:48.892803 652314 checks.go:830] using image pull policy: IfNotPresent I0814 13:49:48.952669 652314 checks.go:862] image exists: registry.k8s.io/kube-apiserver:v1.30.14 I0814 13:49:48.983216 652314 checks.go:862] image exists: registry.k8s.io/kube-controller-manager:v1.30.14 I0814 13:49:49.008115 652314 checks.go:862] image exists: registry.k8s.io/kube-scheduler:v1.30.14 I0814 13:49:49.037284 652314 checks.go:862] image exists: registry.k8s.io/kube-proxy:v1.30.14 I0814 13:49:49.066411 652314 checks.go:862] image exists: registry.k8s.io/coredns/coredns:v1.11.3 I0814 13:49:49.092011 652314 checks.go:862] image exists: registry.k8s.io/pause:3.9 I0814 13:49:49.121953 652314 checks.go:862] image exists: registry.k8s.io/etcd:3.5.15-0 [certs] Using certificateDir folder "/etc/kubernetes/pki" I0814 13:49:49.122018 652314 certs.go:112] creating a new certificate authority for ca [certs] Generating "ca" certificate and key I0814 13:49:49.253708 652314 certs.go:483] validating certificate period for ca certificate [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [k8s-control-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.32.11.221 10.32.11.220] [certs] Generating "apiserver-kubelet-client" certificate and key I0814 13:49:49.585437 652314 certs.go:112] creating a new certificate authority for front-proxy-ca [certs] Generating "front-proxy-ca" certificate and key I0814 13:49:49.868021 652314 certs.go:483] validating certificate period for front-proxy-ca certificate [certs] Generating "front-proxy-client" certificate and key I0814 13:49:49.955381 652314 certs.go:112] creating a new certificate authority for etcd-ca [certs] Generating "etcd/ca" certificate and key I0814 13:49:50.090629 652314 certs.go:483] validating certificate period for etcd/ca certificate [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [k8s-control-1 localhost] and IPs [10.32.11.221 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [k8s-control-1 localhost] and IPs [10.32.11.221 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key I0814 13:49:50.787579 652314 certs.go:78] creating new public/private key files for signing service account users [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0814 13:49:50.912204 652314 kubeconfig.go:112] creating kubeconfig file for admin.conf W0814 13:49:50.912354 652314 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "admin.conf" kubeconfig file I0814 13:49:51.069239 652314 kubeconfig.go:112] creating kubeconfig file for 
super-admin.conf W0814 13:49:51.069386 652314 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "super-admin.conf" kubeconfig file I0814 13:49:51.312214 652314 kubeconfig.go:112] creating kubeconfig file for kubelet.conf W0814 13:49:51.312361 652314 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "kubelet.conf" kubeconfig file I0814 13:49:51.504372 652314 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf W0814 13:49:51.504522 652314 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0814 13:49:51.630717 652314 kubeconfig.go:112] creating kubeconfig file for scheduler.conf W0814 13:49:51.630861 652314 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address [kubeconfig] Writing "scheduler.conf" kubeconfig file [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0814 13:49:51.885607 652314 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml" [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" I0814 13:49:51.885642 652314 manifests.go:103] [control-plane] getting StaticPodSpecs I0814 13:49:51.885763 652314 certs.go:483] validating certificate period for CA certificate I0814 13:49:51.885812 652314 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver" I0814 13:49:51.885821 652314 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver" I0814 13:49:51.885825 652314 manifests.go:129] [control-plane] adding volume "etc-pki" for component "kube-apiserver" I0814 13:49:51.885828 652314 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver" I0814 13:49:51.885833 652314 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver" I0814 13:49:51.885839 652314 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver" I0814 13:49:51.886435 652314 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml" [control-plane] Creating static Pod manifest for "kube-controller-manager" I0814 13:49:51.886453 652314 manifests.go:103] [control-plane] getting StaticPodSpecs I0814 13:49:51.886569 652314 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager" I0814 13:49:51.886579 652314 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager" I0814 13:49:51.886583 652314 manifests.go:129] [control-plane] adding volume "etc-pki" for component "kube-controller-manager" I0814 13:49:51.886589 652314 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager" I0814 13:49:51.886593 652314 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager" I0814 13:49:51.886596 652314 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager" I0814 
13:49:51.886602 652314 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager" I0814 13:49:51.886605 652314 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager" I0814 13:49:51.887135 652314 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [control-plane] Creating static Pod manifest for "kube-scheduler" I0814 13:49:51.887149 652314 manifests.go:103] [control-plane] getting StaticPodSpecs I0814 13:49:51.887264 652314 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler" I0814 13:49:51.887560 652314 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" I0814 13:49:51.887574 652314 kubelet.go:68] Stopping the kubelet [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests" [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s [kubelet-check] The kubelet is healthy after 1.001514652s [api-check] Waiting for a healthy API server. This can take up to 4m0s 超时API

[root@k8snode01-49 bin]# systemctl status kubelet # 查看运行状态 ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled) Drop-In: /usr/lib/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since 三 2025-06-25 17:07:03 CST; 7min ago Docs: https://kubernetes.io/docs/ Main PID: 15576 (kubelet) Tasks: 14 Memory: 30.0M CGroup: /system.slice/kubelet.service └─15576 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --contai... 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.611440 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.712048 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.812741 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.913020 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.024133 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.124389 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.224745 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.325352 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.426141 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.526814 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" [root@k8snode01-49 bin]# journalctl -xeu kubelet 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.325352 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.426141 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.526814 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.635436 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.736500 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.837486 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.938137 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.997608 15576 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.997657 15576 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to get 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.997677 15576 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:26.997721 15576 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manage 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.039080 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.139677 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.240528 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.340706 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.441376 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.542175 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.642549 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.743424 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.844051 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.944536 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.045155 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.145533 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.245835 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.346226 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.446390 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.547045 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.547045 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.647785 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.748016 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.848328 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.949244 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 
25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.031502 15576 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotRea 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.049821 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.151914 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.253839 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.354434 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.455443 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" [root@k8snode01-49 bin]# systemctl status kubelet # 查看运行状态 ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled) Drop-In: /usr/lib/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since 三 2025-06-25 17:07:03 CST; 7min ago Docs: https://kubernetes.io/docs/ Main PID: 15576 (kubelet) Tasks: 14 Memory: 30.0M CGroup: /system.slice/kubelet.service └─15576 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --contai... 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.611440 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.712048 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.812741 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:25 k8snode01-49 kubelet[15576]: E0625 17:14:25.913020 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.024133 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.124389 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.224745 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.325352 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.426141 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.526814 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" [root@k8snode01-49 bin]# journalctl -xeu kubelet 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.325352 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.426141 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.526814 15576 kubelet.go:2448] "Error getting node" err="node 
\"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.635436 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.736500 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.837486 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.938137 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.997608 15576 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.997657 15576 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get 6月 25 17:14:26 k8snode01-49 kubelet[15576]: E0625 17:14:26.997677 15576 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:26.997721 15576 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manage 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.039080 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.139677 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.240528 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.340706 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.441376 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.542175 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.642549 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.743424 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.844051 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:27 k8snode01-49 kubelet[15576]: E0625 17:14:27.944536 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.045155 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.145533 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.245835 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.346226 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.446390 15576 kubelet.go:2448] "Error getting node" 
err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.547045 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.547045 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.647785 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.748016 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.848328 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:28 k8snode01-49 kubelet[15576]: E0625 17:14:28.949244 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.031502 15576 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotRea 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.049821 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.151914 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.253839 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.354434 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found" 6月 25 17:14:29 k8snode01-49 kubelet[15576]: E0625 17:14:29.455443 15576 kubelet.go:2448] "Error getting node" err="node \"k8snode01-49\" not found"

Jun 09 12:56:34 master systemd[1]: Started kubelet: The Kubernetes Node Agent. Jun 09 12:56:34 master kubelet[646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config> Jun 09 12:56:34 master kubelet[646]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image g> Jun 09 12:56:34 master kubelet[646]: I0609 12:56:34.402786 646 server.go:204] "--pod-infra-container-image will not be pruned by th> Jun 09 12:56:34 master kubelet[646]: I0609 12:56:34.739010 646 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jun 09 12:56:34 master kubelet[646]: I0609 12:56:34.739029 646 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 09 12:56:34 master kubelet[646]: I0609 12:56:34.739244 646 server.go:919] "Client rotation is on, will bootstrap in background" Jun 09 12:56:34 master kubelet[646]: I0609 12:56:34.740368 646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubel> Jun 09 12:56:34 master kubelet[646]: I0609 12:56:34.741808 646 dynamic_cafile_content.go:157] "Starting controller" name="client-ca> Jun 09 12:56:34 master kubelet[646]: E0609 12:56:34.743787 646 run.go:74] "command failed" err="failed to run Kubelet: validate ser> Jun 09 12:56:34 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 09 12:56:34 master systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 09 12:56:44 master systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. Jun 09 12:56:44 master systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.

测试master节点是双网卡,一个外网,一个内网提供服务的网络,192开头的C类网 [root@ex2 ~]# vim /etc/containerd/config.toml [root@ex2 ~]# systemctl restart contanerd Failed to restart contanerd.service: Unit contanerd.service not found. [root@ex2 ~]# systemctl restart containerd [root@ex2 ~]# kubeadm init --cri-socket=unix:///run/containerd/containerd.sock I0512 21:05:58.445742 29326 version.go:256] remote version is much newer: v1.33.0; falling back to: stable-1.29 [init] Using Kubernetes version: v1.29.15 [preflight] Running pre-flight checks error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... To see the stack trace of this error execute with --v=5 or higher [root@ex2 ~]# kubeadm init --cri-socket=unix:///run/containerd/containerd.sock --v=6 I0512 21:06:12.403320 29373 interface.go:432] Looking for default routes with IPv4 addresses I0512 21:06:12.403340 29373 interface.go:437] Default route transits interface "enp0s3" I0512 21:06:12.403401 29373 interface.go:209] Interface enp0s3 is up I0512 21:06:12.403418 29373 interface.go:257] Interface "enp0s3" has 3 addresses :[10.0.2.15/24 fd00::a00:27ff:fe39:5a4f/64 fe80::a00:27ff:fe39:5a4f/64]. I0512 21:06:12.403425 29373 interface.go:224] Checking addr 10.0.2.15/24. I0512 21:06:12.403428 29373 interface.go:231] IP found 10.0.2.15 I0512 21:06:12.403434 29373 interface.go:263] Found valid IPv4 address 10.0.2.15 for interface "enp0s3". 
I0512 21:06:12.403437 29373 interface.go:443] Found active IP 10.0.2.15 I0512 21:06:12.403452 29373 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd" I0512 21:06:12.412078 29373 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt I0512 21:06:15.579763 29373 version.go:256] remote version is much newer: v1.33.0; falling back to: stable-1.29 I0512 21:06:15.579786 29373 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.29.txt I0512 21:06:17.698482 29373 certs.go:519] validating certificate period for CA certificate I0512 21:06:17.698509 29373 certs.go:519] validating certificate period for front-proxy CA certificate [init] Using Kubernetes version: v1.29.15 [preflight] Running pre-flight checks I0512 21:06:17.698580 29373 checks.go:563] validating Kubernetes and kubeadm version I0512 21:06:17.698589 29373 checks.go:168] validating if the firewall is enabled and active I0512 21:06:17.704447 29373 checks.go:203] validating availability of port 6443 I0512 21:06:17.704536 29373 checks.go:203] validating availability of port 10259 I0512 21:06:17.704543 29373 checks.go:203] validating availability of port 10257 I0512 21:06:17.704551 29373 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml I0512 21:06:17.704564 29373 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml I0512 21:06:17.704570 29373 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml I0512 21:06:17.704575 29373 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml I0512 21:06:17.704580 29373 checks.go:430] validating if the connectivity type is via proxy or direct I0512 21:06:17.704589 29373 checks.go:469] validating http connectivity to first IP address in the CIDR I0512 21:06:17.704597 29373 checks.go:469] validating http connectivity to first IP address in the CIDR I0512 21:06:17.704601 29373 checks.go:104] validating the container runtime I0512 21:06:17.713489 29373 checks.go:639] validating whether swap is enabled or not I0512 21:06:17.713517 29373 checks.go:370] validating the presence of executable crictl I0512 21:06:17.713531 29373 checks.go:370] validating the presence of executable conntrack I0512 21:06:17.713537 29373 checks.go:370] validating the presence of executable ip I0512 21:06:17.713542 29373 checks.go:370] validating the presence of executable iptables I0512 21:06:17.713550 29373 checks.go:370] validating the presence of executable mount I0512 21:06:17.713556 29373 checks.go:370] validating the presence of executable nsenter I0512 21:06:17.713562 29373 checks.go:370] validating the presence of executable ethtool I0512 21:06:17.713568 29373 checks.go:370] validating the presence of executable tc I0512 21:06:17.713573 29373 checks.go:370] validating the presence of executable touch I0512 21:06:17.713579 29373 checks.go:516] running all checks I0512 21:06:17.717271 29373 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost I0512 21:06:17.717291 29373 checks.go:605] validating kubelet version I0512 21:06:17.746156 29373 checks.go:130] validating if the "kubelet" service is enabled and active I0512 21:06:17.751764 29373 checks.go:203] validating availability of port 10250 I0512 21:06:17.751797 29373 checks.go:329] 
validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables I0512 21:06:17.751812 29373 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward I0512 21:06:17.751819 29373 checks.go:203] validating availability of port 2379 I0512 21:06:17.751826 29373 checks.go:203] validating availability of port 2380 I0512 21:06:17.751832 29373 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd [preflight] Some fatal errors occurred: [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... error execution phase preflight k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446 k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232 k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1 k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:124 github.com/spf13/cobra.(*Command).execute github.com/spf13/[email protected]/command.go:940 github.com/spf13/cobra.(*Command).ExecuteC github.com/spf13/[email protected]/command.go:1068 github.com/spf13/cobra.(*Command).Execute github.com/spf13/[email protected]/command.go:992 k8s.io/kubernetes/cmd/kubeadm/app.Run k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 main.main k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25 runtime.main runtime/proc.go:272 runtime.goexit runtime/asm_amd64.s:1700

[root@0404 ~]# systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Active: active (running) since 日 2025-06-08 11:25:00 CST; 45min ago Docs: https://docs.docker.com Main PID: 13844 (dockerd) Tasks: 27 Memory: 279.0M CGroup: /system.slice/docker.service ├─13844 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/con... ├─14946 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-po... └─14952 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 27... 6月 08 12:10:39 0404 dockerd[13844]: time="2025-06-08T12:10:39.791256349+08:... 6月 08 12:10:39 0404 dockerd[13844]: time="2025-06-08T12:10:39.792029919+08:... 6月 08 12:10:42 0404 dockerd[13844]: time="2025-06-08T12:10:42.790951361+08:... 6月 08 12:10:42 0404 dockerd[13844]: time="2025-06-08T12:10:42.791697100+08:... 6月 08 12:10:43 0404 dockerd[13844]: time="2025-06-08T12:10:43.792125591+08:... 6月 08 12:10:43 0404 dockerd[13844]: time="2025-06-08T12:10:43.792293242+08:... 6月 08 12:10:46 0404 dockerd[13844]: time="2025-06-08T12:10:46.791733691+08:... 6月 08 12:10:46 0404 dockerd[13844]: time="2025-06-08T12:10:46.792178125+08:... 6月 08 12:10:47 0404 dockerd[13844]: time="2025-06-08T12:10:47.792911404+08:... 6月 08 12:10:47 0404 dockerd[13844]: time="2025-06-08T12:10:47.793449663+08:... Hint: Some lines were ellipsized, use -l to show in full. [root@0404 ~]# # 列出所有系统Pods(需容器运行时正常) [root@0404 ~]# crictl pods -n kube-system Incorrect Usage: invalid value "kube-system" for flag -n: parse error NAME: crictl pods - List pods USAGE: crictl pods command [command options] [arguments...] COMMANDS: help, h Shows a list of commands or help for one command OPTIONS: --id value filter by pod id --label value [ --label value ] filter by key=value label --last value, -n value Show last n recently created pods. Set 0 for unlimited (default: 0) --latest, -l Show the most recently created pod (default: false) --name value filter by pod name regular expression pattern --namespace value filter by pod namespace regular expression pattern --no-trunc Show output without truncating the ID (default: false) --output value, -o value Output format, One of: json|yaml|table (default: "table") --quiet, -q list only pod IDs (default: false) --state value, -s value filter by pod state --verbose, -v show verbose info for pods (default: false) --help, -h show help (default: false) FATA[0000] invalid value "kube-system" for flag -n: parse error [root@0404 ~]# crictl ps -a WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
E0608 12:12:43.766070 18318 remote_runtime.go:390] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}" FATA[0000] listing containers: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"

[root@localhost opt]# ./download_k8s_offline.sh ======================================== Kubernetes 离线部署包生成脚本 (v3.1) ======================================== 🔄 配置系统仓库... Repository epel is listed more than once in the configuration Docker CE Stable 9.6 kB/s | 3.5 kB 00:00 Kubernetes 5.3 kB/s | 1.4 kB 00:00 软件包 epel-release-8-22.el8.noarch 已安装。 依赖关系解决。 无需任何处理。 完毕! 🔄 创建 yum 缓存(不使用 fast 参数)... Repository epel is listed more than once in the configuration 38 文件已删除 Repository epel is listed more than once in the configuration CentOS-8 - BaseOS - mirrors.aliyun.com 4.2 MB/s | 4.6 MB 00:01 CentOS-8 - AppStream - mirrors.aliyun.com 6.0 MB/s | 8.4 MB 00:01 CentOS-8 - Extras - mirrors.aliyun.com 23 kB/s | 10 kB 00:00 Extra Packages for Enterprise Linux 8 - x86_64 7.1 MB/s | 14 MB 00:01 Docker CE Stable 145 kB/s | 79 kB 00:00 Kubernetes 297 kB/s | 182 kB 00:00 元数据缓存已建立。 ✅ 仓库配置完成 🛠️ 安装必要工具... Repository epel is listed more than once in the configuration 上次元数据过期检查:0:00:09 前,执行于 2025年07月25日 星期五 04时51分38秒。 软件包 yum-utils-4.0.21-3.el8.noarch 已安装。 软件包 createrepo_c-0.17.2-3.el8.x86_64 已安装。 软件包 wget-1.19.5-10.el8.x86_64 已安装。 软件包 curl-7.61.1-22.el8.x86_64 已安装。 软件包 jq-1.5-12.el8.x86_64 已安装。 依赖关系解决。 无需任何处理。 完毕! ✅ 工具安装完成 🔍 检测可用 Kubernetes 版本... 📋 可用 Kubernetes 版本: 1) 1.33.3 2) 1.33.2 3) 1.33.1 4) 1.33.0 5) 1.32.7 6) 1.32.6 7) 1.32.5 8) 1.32.4 9) 1.32.3 10) 1.32.2 11) 1.28.2-0 12) 1.28.1-0 13) 1.28.0-0 14) 1.27.6-0 15) 1.27.5-0 16) 1.27.4-0 17) 1.27.3-0 18) 1.27.2-0 19) 1.27.1-0 20) 1.27.0-0 21) 1.26.9-0 22) 1.26.8-0 23) 1.26.7-0 24) 1.26.6-0 25) 1.26.5-0 26) 1.26.4-0 27) 1.26.3-0 28) 1.26.2-0 29) 1.26.1-0 30) 1.26.0-0 31) 1.25.14-0 32) 1.25.13-0 33) 1.25.12-0 34) 1.25.11-0 35) 1.25.10-0 36) 1.25.9-0 37) 1.25.8-0 38) 1.25.7-0 39) 1.25.6-0 40) 1.25.5-0 41) 1.25.4-0 42) 1.25.3-0 43) 1.25.2-0 44) 1.25.1-0 45) 1.25.0-0 46) 1.24.17-0 47) 1.24.16-0 48) 1.24.15-0 49) 1.24.14-0 50) 1.24.13-0 51) 1.24.12-0 52) 1.24.11-0 53) 1.24.10-0 54) 1.24.9-0 55) 1.24.8-0 56) 1.24.7-0 57) 1.24.6-0 58) 1.24.5-0 59) 1.24.4-0 60) 1.24.3-0 61) 1.24.2-0 62) 1.24.1-0 63) 1.24.0-0 64) 1.23.17-0 65) 1.23.16-0 66) 1.23.15-0 67) 1.23.14-0 68) 1.23.13-0 69) 1.23.12-0 70) 1.23.11-0 71) 1.23.10-0 72) 1.23.9-0 73) 1.23.8-0 74) 1.23.7-0 75) 1.23.6-0 76) 1.23.5-0 77) 1.23.4-0 78) 1.23.3-0 79) 1.23.2-0 80) 1.23.1-0 81) 1.23.0-0 82) 1.22.17-0 83) 1.22.16-0 84) 1.22.15-0 85) 1.22.14-0 86) 1.22.13-0 87) 1.22.12-0 88) 1.22.11-0 89) 1.22.10-0 90) 1.22.9-0 91) 1.22.8-0 92) 1.22.7-0 93) 1.22.6-0 94) 1.22.5-0 95) 1.22.4-0 96) 1.22.3-0 97) 1.22.2-0 98) 1.22.1-0 99) 1.22.0-0 100) 1.21.14-0 101) 1.21.13-0 102) 1.21.12-0 103) 1.21.11-0 104) 1.21.10-0 105) 1.21.9-0 106) 1.21.8-0 107) 1.21.7-0 108) 1.21.6-0 109) 1.21.5-0 110) 1.21.4-0 111) 1.21.3-0 112) 1.21.2-0 113) 1.21.1-0 114) 1.21.0-0 115) 1.20.15-0 116) 1.20.14-0 117) 1.20.13-0 118) 1.20.12-0 119) 1.20.11-0 120) 1.20.10-0 121) 1.20.9-0 122) 1.20.8-0 123) 1.20.7-0 124) 1.20.6-0 125) 1.20.5-0 126) 1.20.4-0 127) 1.20.2-0 128) 1.20.1-0 129) 1.20.0-0 130) 1.19.16-0 131) 1.19.15-0 132) 1.19.14-0 133) 1.19.13-0 134) 1.19.12-0 135) 1.19.11-0 136) 1.19.10-0 137) 1.19.9-0 138) 1.19.8-0 139) 1.19.7-0 140) 1.19.6-0 141) 1.19.5-0 142) 1.19.4-0 143) 1.19.3-0 144) 1.19.2-0 145) 1.19.1-0 146) 1.19.0-0 147) 1.18.20-0 148) 1.18.19-0 149) 1.18.18-0 150) 1.18.17-0 151) 1.18.16-0 152) 1.18.15-0 153) 1.18.14-0 154) 1.18.13-0 155) 1.18.12-0 156) 1.18.10-0 157) 1.18.9-0 158) 1.18.8-0 159) 1.18.6-0 160) 1.18.5-0 161) 1.18.4-1 162) 1.18.4-0 163) 1.18.3-0 164) 1.18.2-0 165) 1.18.1-0 166) 1.18.0-0 167) 1.17.17-0 168) 1.17.16-0 169) 
1.17.15-0 170) 1.17.14-0 171) 1.17.13-0 172) 1.17.12-0 173) 1.17.11-0 174) 1.17.9-0 175) 1.17.8-0 176) 1.17.7-1 177) 1.17.7-0 178) 1.17.6-0 179) 1.17.5-0 180) 1.17.4-0 181) 1.17.3-0 182) 1.17.2-0 183) 1.17.1-0 184) 1.17.0-0 185) 1.16.15-0 186) 1.16.14-0 187) 1.16.13-0 188) 1.16.12-0 189) 1.16.11-1 190) 1.16.11-0 191) 1.16.10-0 192) 1.16.9-0 193) 1.16.8-0 194) 1.16.7-0 195) 1.16.6-0 196) 1.16.5-0 197) 1.16.4-0 198) 1.16.3-0 199) 1.16.2-0 200) 1.16.1-0 201) 1.16.0-0 202) 1.15.12-0 203) 1.15.11-0 204) 1.15.10-0 205) 1.15.9-0 206) 1.15.8-0 207) 1.15.7-0 208) 1.15.6-0 209) 1.15.5-0 210) 1.15.4-0 211) 1.15.3-0 212) 1.15.2-0 213) 1.15.1-0 214) 1.15.0-0 215) 1.14.10-0 216) 1.14.9-0 217) 1.14.8-0 218) 1.14.7-0 219) 1.14.6-0 220) 1.14.5-0 221) 1.14.4-0 222) 1.14.3-0 223) 1.14.2-0 224) 1.14.1-0 225) 1.14.0-0 226) 1.13.12-0 227) 1.13.11-0 228) 1.13.10-0 229) 1.13.9-0 230) 1.13.8-0 231) 1.13.7-0 232) 1.13.6-0 233) 1.13.5-0 234) 1.13.4-0 235) 1.13.3-0 236) 1.13.2-0 237) 1.13.1-0 238) 1.13.0-0 239) 1.12.10-0 240) 1.12.9-0 241) 1.12.8-0 242) 1.12.7-0 243) 1.12.6-0 244) 1.12.5-0 245) 1.12.4-0 246) 1.12.3-0 247) 1.12.2-0 248) 1.12.1-0 249) 1.12.0-0 250) 1.11.10-0 251) 1.11.9-0 252) 1.11.8-0 253) 1.11.7-0 254) 1.11.6-0 255) 1.11.5-0 256) 1.11.4-0 257) 1.11.3-0 258) 1.11.2-0 259) 1.11.1-0 260) 1.11.0-0 261) 1.10.13-0 262) 1.10.12-0 263) 1.10.11-0 264) 1.10.10-0 265) 1.10.9-0 266) 1.10.8-0 267) 1.10.7-0 268) 1.10.6-0 269) 1.10.5-0 270) 1.10.4-0 271) 1.10.3-0 272) 1.10.2-0 273) 1.10.1-0 274) 1.10.0-0 275) 1.9.11-0 276) 1.9.10-0 277) 1.9.9-0 278) 1.9.8-0 279) 1.9.7-0 280) 1.9.6-0 281) 1.9.5-0 282) 1.9.4-0 283) 1.9.3-0 284) 1.9.2-0 285) 1.9.1-0 286) 1.9.0-0 287) 1.8.15-0 288) 1.8.14-0 289) 1.8.13-0 290) 1.8.12-0 291) 1.8.11-0 292) 1.8.10-0 293) 1.8.9-0 294) 1.8.8-0 295) 1.8.7-0 296) 1.8.6-0 297) 1.8.5-1 298) 1.8.5-0 299) 1.8.4-1 300) 1.8.4-0 301) 1.8.3-1 302) 1.8.3-0 303) 1.8.2-1 304) 1.8.2-0 305) 1.8.1-1 306) 1.8.1-0 307) 1.8.0-1 308) 1.8.0-0 309) 1.7.16-0 310) 1.7.15-0 311) 1.7.14-0 312) 1.7.11-1 313) 1.7.11-0 314) 1.7.10-1 315) 1.7.10-0 316) 1.7.9-1 317) 1.7.9-0 318) 1.7.8-2 319) 1.7.8-1 320) 1.7.7-2 321) 1.7.7-1 322) 1.7.6-2 323) 1.7.6-1 324) 1.7.5-1 325) 1.7.5-0 326) 1.7.4-1 327) 1.7.4-0 328) 1.7.3-2 329) 1.7.3-1 330) 1.7.2-1 331) 1.7.2-0 332) 1.7.1-1 333) 1.7.1-0 334) 1.7.0-1 335) 1.7.0-0 336) 1.6.13-1 337) 1.6.13-0 338) 1.6.12-1 339) 1.6.12-0 340) 1.6.11-1 341) 1.6.11-0 342) 1.6.10-1 343) 1.6.10-0 344) 1.6.9-1 345) 1.6.9-0 346) 1.6.8-1 347) 1.6.8-0 348) 1.6.7-1 349) 1.6.7-0 350) 1.6.6-1 351) 1.6.6-0 352) 1.6.5-1 353) 1.6.5-0 354) 1.6.4-1 355) 1.6.4-0 356) 1.6.3-1 357) 1.6.3-0 358) 1.6.2-1 359) 1.6.2-0 360) 1.6.1-1 361) 1.6.1-0 362) 1.6.0-1 363) 1.6.0-0 364) 1.5.4-1 365) 1.5.4-0 👉 请选择版本号 [1-365, 默认1]: 11 ✅ 已选择版本: Kubernetes v1.28.2 📦 下载Kubernetes组件... 
```
⬇️ 下载: kubelet-1.28.2-0
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubelet-1.28.2-0.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubelet-1.28.2-0.el7.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubelet-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubelet-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubelet-1.28.2-0.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubelet-1.28.2-0.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubelet-1.28.2-0.el7.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubelet-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubelet-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubelet-1.28.2-0.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubelet-1.28.2-0.rpm
🔗 尝试: https://dl.k8s.io/release/kubelet-1.28.2-0.el7.rpm
🔗 尝试: https://dl.k8s.io/release/kubelet-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubelet-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubelet-1.28.2-0.x86_64.rpm
🔄 使用yumdownloader下载: kubelet
Repository epel is listed more than once in the configuration
错误:未知仓库:'k8s-*'
mv: '/opt/k8s-offline/packages/kubelet-1.28.2-0.rpm' 与'/opt/k8s-offline/packages/kubelet-1.28.2-0.rpm' 为同一文件
✅ yumdownloader下载成功: kubelet-1.28.2-0.rpm

⬇️ 下载: kubeadm-1.28.2-0
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubeadm-1.28.2-0.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubeadm-1.28.2-0.el7.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubeadm-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubeadm-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubeadm-1.28.2-0.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubeadm-1.28.2-0.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubeadm-1.28.2-0.el7.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubeadm-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubeadm-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubeadm-1.28.2-0.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubeadm-1.28.2-0.rpm
🔗 尝试: https://dl.k8s.io/release/kubeadm-1.28.2-0.el7.rpm
🔗 尝试: https://dl.k8s.io/release/kubeadm-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubeadm-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubeadm-1.28.2-0.x86_64.rpm
🔄 使用yumdownloader下载: kubeadm
Repository epel is listed more than once in the configuration
错误:未知仓库:'k8s-*'
mv: '/opt/k8s-offline/packages/kubeadm-1.28.2-0.rpm' 与'/opt/k8s-offline/packages/kubeadm-1.28.2-0.rpm' 为同一文件
✅ yumdownloader下载成功: kubeadm-1.28.2-0.rpm

⬇️ 下载: kubectl-1.28.2-0
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubectl-1.28.2-0.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubectl-1.28.2-0.el7.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubectl-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubectl-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/kubectl-1.28.2-0.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubectl-1.28.2-0.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubectl-1.28.2-0.el7.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubectl-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubectl-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/kubectl-1.28.2-0.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubectl-1.28.2-0.rpm
🔗 尝试: https://dl.k8s.io/release/kubectl-1.28.2-0.el7.rpm
🔗 尝试: https://dl.k8s.io/release/kubectl-1.28.2-0.el7.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubectl-1.28.2-0.el8.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/kubectl-1.28.2-0.x86_64.rpm
🔄 使用yumdownloader下载: kubectl
Repository epel is listed more than once in the configuration
错误:未知仓库:'k8s-*'
mv: '/opt/k8s-offline/packages/kubectl-1.28.2-0.rpm' 与'/opt/k8s-offline/packages/kubectl-1.28.2-0.rpm' 为同一文件
✅ yumdownloader下载成功: kubectl-1.28.2-0.rpm

⬇️ 下载: cri-tools-1.26.0
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/cri-tools-1.26.0.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/cri-tools-1.26.0.el7.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/cri-tools-1.26.0.el7.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/cri-tools-1.26.0.el8.x86_64.rpm
🔗 尝试: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages/cri-tools-1.26.0.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/cri-tools-1.26.0.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/cri-tools-1.26.0.el7.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/cri-tools-1.26.0.el7.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/cri-tools-1.26.0.el8.x86_64.rpm
🔗 尝试: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/cri-tools-1.26.0.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/cri-tools-1.26.0.rpm
🔗 尝试: https://dl.k8s.io/release/cri-tools-1.26.0.el7.rpm
🔗 尝试: https://dl.k8s.io/release/cri-tools-1.26.0.el7.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/cri-tools-1.26.0.el8.x86_64.rpm
🔗 尝试: https://dl.k8s.io/release/cri-tools-1.26.0.x86_64.rpm
🔄 使用yumdownloader下载: cri-tools
Repository epel is listed more than once in the configuration
错误:未知仓库:'k8s-*'
mv: '/opt/k8s-offline/packages/cri-tools-1.26.0.rpm' 与'/opt/k8s-offline/packages/cri-tools-1.26.0.rpm' 为同一文件
✅ yumdownloader下载成功: cri-tools-1.26.0.rpm

✅ Kubernetes组件下载完成
🐳 下载Docker组件...
```

根据日志重新生成所有脚本
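从日志可以定位原脚本的两处小问题:yumdownloader 实际已下载成功,但脚本随后把文件 mv 到它本就所在的目录,触发"为同一文件"报错;同时 `'k8s-*'` 这类仓库通配与本机真实的仓库 ID 不匹配,产生"未知仓库"错误。下面是一个修正后的下载函数草稿(仅为示意,并非原脚本本身:`PKG_DIR`、镜像列表和仓库 ID `kubernetes` 均为假设,请按 `yum repolist` 的实际输出调整):

```bash
#!/usr/bin/env bash
set -euo pipefail

# 假设:离线包目录与原脚本日志中的路径一致
PKG_DIR="/opt/k8s-offline/packages"
mkdir -p "$PKG_DIR"

# 假设的直链镜像与后缀列表,可按需扩充
MIRRORS=("https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages")
SUFFIXES=(".rpm" ".el7.rpm" ".el7.x86_64.rpm" ".el8.x86_64.rpm" ".x86_64.rpm")

download_pkg() {
  local name="$1" version="$2"
  local target="$PKG_DIR/${name}-${version}.rpm"

  # 1) 先按原脚本思路逐个尝试直链下载
  local base suffix
  for base in "${MIRRORS[@]}"; do
    for suffix in "${SUFFIXES[@]}"; do
      if curl -fsSL -o "$target" "${base}/${name}-${version}${suffix}"; then
        echo "✅ 直接下载成功: ${target##*/}"
        return 0
      fi
      rm -f "$target"   # 清理 curl 失败时可能留下的空文件
    done
  done

  # 2) 回退到 yumdownloader:用 --destdir 直接写入目标目录,
  #    从根源上避免 "mv ... 为同一文件" 的自我移动错误;
  #    --enablerepo 的值必须是 yum repolist 中真实存在的仓库 ID(此处 kubernetes 为假设)
  yumdownloader --destdir="$PKG_DIR" --enablerepo=kubernetes "${name}-${version}"
  echo "✅ yumdownloader下载成功: ${name}-${version}.rpm"
}

download_pkg kubelet   1.28.2-0
download_pkg kubeadm   1.28.2-0
download_pkg kubectl   1.28.2-0
download_pkg cri-tools 1.26.0
```

另外,日志中的 "Repository epel is listed more than once in the configuration" 说明 /etc/yum.repos.d/ 下存在重复的 epel 配置段,清理重复的 .repo 文件即可消除该警告;若必须保留原脚本的 mv 步骤,也可以在移动前加上 `[ "$src" != "$dst" ] && mv "$src" "$dst"` 这样的同文件判断。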
