Kubernetes Services don't load balance. Here is why 👇

By default, the kube-proxy component in Kubernetes uses iptables for routing requests (it supports IPVS as well).

We were curious about how it manages load balancing behind the scenes, and we discovered something interesting: an iptables feature called 𝘀𝘁𝗮𝘁𝗶𝘀𝘁𝗶𝗰 𝗺𝗼𝗱𝗲 𝗿𝗮𝗻𝗱𝗼𝗺 𝗽𝗿𝗼𝗯𝗮𝗯𝗶𝗹𝗶𝘁𝘆.

iptables is used for packet filtering and network address translation, and this feature lets you write rules that match a given percentage of packets at random.

For example, we tested a Service whose endpoints pointed to a three-pod deployment. The first rule showed a statistic mode random probability of 0.33, essentially spreading the load across the three pods.

This is probabilistic traffic distribution rather than actual load balancing:

- It doesn't consider the actual load on the servers.
- It doesn't guarantee an even distribution of traffic over time.
- It doesn't maintain session persistence.

You might wonder, "Is it necessary to understand Kubernetes at this level?" The answer is Yes & No 😀

In one implementation, we encountered a major routing issue. By looking at the iptables rules, we ruled out any cluster configuration issues and pinpointed the problem.

We cover many topics like this in the bonus sections of our CKA course.

𝗖𝗵𝗲𝗰𝗸 𝗶𝘁 𝗼𝘂𝘁: https://lnkd.in/gGbQ5vYf

If you have something to add, please drop a comment.

#kubernetes #devops
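For anyone curious about the mechanics: for a Service with three endpoints, kube-proxy programs a chain of DNAT rules where the first rule matches with probability ~0.33, the second with 0.5 of what remains, and the last rule catches the rest, so each pod ends up with roughly a third of new connections. The Go sketch below (an illustration only, not kube-proxy's actual code; the pod names are made up) simulates that rule chain and shows why the split is roughly even yet completely blind to how busy each pod is.

```go
// Simulation of how chained iptables "statistic --mode random" rules pick a
// backend for a three-pod Service: the first rule matches ~1/3 of packets,
// the second ~1/2 of what's left, and the last rule catches the rest.
// Pod names and the connection count are made up for illustration.
package main

import (
	"fmt"
	"math/rand"
)

// pickBackend mimics the rule chain kube-proxy programs for three endpoints.
func pickBackend(r *rand.Rand) string {
	if r.Float64() < 1.0/3.0 { // rule 1: --probability 0.333...
		return "pod-a"
	}
	if r.Float64() < 0.5 { // rule 2: --probability 0.5 (of the remaining 2/3)
		return "pod-b"
	}
	return "pod-c" // rule 3: no statistic match, takes whatever is left
}

func main() {
	r := rand.New(rand.NewSource(1))
	counts := map[string]int{}
	for i := 0; i < 100000; i++ {
		counts[pickBackend(r)]++
	}
	// Roughly a third each, but the choice never looks at how busy a pod is.
	fmt.Println(counts)
}
```

Over many connections the counts land near one third each, which is exactly the point: the distribution is statistical, not load-aware.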
Great insight, but the claim is meh. You are nitpicking what "load balancing" means here and trying to redefine it. DNS load balancing is also unintelligent, but, for all intents and purposes, we still call it "load balancing". Essentially, this "probabilistic traffic distribution" (LOL) is still load balancing. What you really want to say is that the load balancing a Kubernetes Service does is rather dumb and doesn't do what an F5 does, for example. And I also think the design of the k8s Service is simple on purpose.
That's an interesting read
By default, the k8s Service uses the round-robin method for distributing traffic to the pods.
IPVS, service mesh
Definitely worth reading
Yes, absolutely, that's what I discovered while studying iptables rules. My infrastructure is based on Proxmox, OPNsense, Kubernetes, MetalLB BGP, and Calico in IPIP mode, with one VLAN for the workers and another for the masters. The Nginx ingress controller does not load-balance traffic based on the workload of the workers, which makes sense, since it's the scheduler that handles placing pods on nodes based on the required resources and those available on each node. That's why I'm trying to configure HAProxy so that routing decisions are made based on metrics collected by Prometheus, in order to avoid sending traffic to nodes with more than 95% memory usage while others are below 50%.
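One way to prototype that kind of metric-aware steering is a small sidecar process that asks Prometheus which nodes are over the memory threshold and then marks them as drained in HAProxy. The Go sketch below is only an illustration under assumptions: the Prometheus address, the node_exporter-based query, and the 95% threshold are placeholders, and the actual step of updating HAProxy (e.g. via its runtime API) is left out.

```go
// Sketch: query Prometheus for per-node memory usage and report nodes that an
// external process should drain in HAProxy. The Prometheus URL, the query, and
// the threshold are assumptions for illustration, not the poster's real setup.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strconv"
)

type promResponse struct {
	Status string `json:"status"`
	Data   struct {
		Result []struct {
			Metric map[string]string `json:"metric"`
			Value  [2]interface{}    `json:"value"` // [timestamp, "value-as-string"]
		} `json:"result"`
	} `json:"data"`
}

func main() {
	const prom = "http://prometheus.example.internal:9090" // hypothetical address
	// Percentage of memory in use per node, from node_exporter metrics.
	query := `100 * (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)`

	resp, err := http.Get(prom + "/api/v1/query?query=" + url.QueryEscape(query))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var pr promResponse
	if err := json.NewDecoder(resp.Body).Decode(&pr); err != nil {
		panic(err)
	}

	for _, r := range pr.Data.Result {
		used, _ := strconv.ParseFloat(r.Value[1].(string), 64)
		if used > 95 {
			// Here you would tell HAProxy to stop sending new traffic to this node.
			fmt.Printf("drain %s (memory %.1f%% used)\n", r.Metric["instance"], used)
		}
	}
}
```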
Helpful insight
nftables for Red Hat distros, I guess since RHEL 8; it's not compatible with older Linux versions. Or replace kube-proxy with Cilium, which is a great option since it's based on eBPF.
Thanks for sharing
Even with --mode random it's not real balancing, it's load sharing, because it just calculates a hash using the src IP, src port, dst IP, dst port, and protocol fields. Load balancing is when packets are balanced by some usage value (CPU, for instance) of the endpoints.
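To make the load-sharing vs. load-balancing distinction from the comment above concrete, here is a small Go sketch of 5-tuple hashing (all addresses, ports, and backend names are made up): every packet of the same flow hashes to the same backend, the split across many flows is roughly even, but nothing in the decision ever looks at endpoint CPU or memory.

```go
// Illustration of 5-tuple "load sharing": the flow's (src IP, src port, dst IP,
// dst port, protocol) is hashed to pick a backend, so a given flow always lands
// on the same endpoint, regardless of how busy that endpoint is.
package main

import (
	"fmt"
	"hash/fnv"
)

type fiveTuple struct {
	srcIP, dstIP     string
	srcPort, dstPort uint16
	proto            string
}

// pickBackend maps a flow to one of the backends purely by hashing the tuple.
func pickBackend(t fiveTuple, backends []string) string {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s|%d|%s|%d|%s", t.srcIP, t.srcPort, t.dstIP, t.dstPort, t.proto)
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	backends := []string{"pod-a", "pod-b", "pod-c"}
	flow := fiveTuple{"10.0.0.7", "10.96.0.10", 40512, 443, "tcp"}

	// The same flow always maps to the same backend...
	fmt.Println(pickBackend(flow, backends))
	fmt.Println(pickBackend(flow, backends))

	// ...and a new flow may land elsewhere; backend load never enters the decision.
	flow.srcPort = 40513
	fmt.Println(pickBackend(flow, backends))
}
```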