Using IPVS in kube-proxy with eksctl

Posted on Mon 20 June 2022 in kubernetes, eksctl, kube-proxy • Tagged with kubernetes, eksctl, kube-proxy

I have a Kubernetes cluster launched with eksctl. I can inspect and edit the configuration of kube-proxy with:

kubectl edit configmap kube-proxy-config -n kube-system

I see that the default configuration uses the iptables mode. To change it, the mode parameter has to be set to ipvs, and the scheduler parameter in the ipvs section, which is initially empty, has to be assigned one of these scheduling policies:

  • rr: round-robin
  • lc: least connection
  • dh: destination hashing
  • sh: source hashing
  • sed: shortest expected delay
  • nq: never queue
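With those two changes, the relevant fragment of the kube-proxy configuration would look roughly like this (a sketch showing only the fields mentioned above, with rr as an example choice):

```yaml
# Fragment of the KubeProxyConfiguration inside the configmap:
mode: "ipvs"
ipvs:
  scheduler: "rr"   # one of rr, lc, dh, sh, sed, nq
```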

Notice that the corresponding kernel modules must be present on the worker node. You can connect to the node with ssh and check which modules are loaded with:

lsmod | grep ip_vs
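If some of the modules are missing, they can usually be loaded manually with modprobe. A sketch, assuming the round-robin scheduler (adjust the second module name to the scheduler you chose):

```shell
# Load the core IPVS module plus the module for the chosen scheduler,
# e.g. round-robin (ip_vs_rr). Requires root on the worker node.
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
```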

In order to apply the configuration, kube-proxy has to be restarted with this command:

kubectl rollout restart ds kube-proxy -n kube-system
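Once kube-proxy is running in ipvs mode, you can inspect the IPVS virtual servers it creates directly on a worker node. This assumes the ipvsadm tool is installed there:

```shell
# On the worker node: list the IPVS virtual servers and their backends.
# Each Kubernetes Service should appear as a virtual server entry.
sudo ipvsadm -Ln
```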

On the worker node, the lsmod command gives me this:

ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 176128  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          163840  8 xt_conntrack,nf_nat,xt_state,xt_nat,nf_conntrack_netlink,xt_connmark,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs

This means that the modules …


Continue reading

Pinning CPUs in Kubernetes using full-pcpus-only with eksctl

Posted on Mon 16 May 2022 in kubernetes, eksctl • Tagged with kubernetes, eksctl

I was trying to use the full-pcpus-only option with eksctl and was having no luck. In the end, I got it working with this cluster.yaml configuration file:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: k8s-Stokholm-Cluster
  region: eu-north-1

nodeGroups:
  - name: ng-1
    instanceType: c5.4xlarge
    desiredCapacity: 1
    ssh:
      publicKeyPath: /home/joaquin/k8s/joaquin-k8s-stockholm.pub
    kubeletExtraConfig:
      cpuManagerPolicy: static
      cpuManagerPolicyOptions:
        full-pcpus-only: "true"
      kubeReserved:
        cpu: "300m"
        memory: "300Mi"
        ephemeral-storage: "1Gi"
      kubeReservedCgroup: "/kube-reserved"
      systemReserved:
        cpu: "300m"
        memory: "300Mi"
        ephemeral-storage: "1Gi"
      featureGates:
        CPUManager: true
        CPUManagerPolicyOptions: true
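The cluster can then be created from this file with the usual eksctl command:

```shell
eksctl create cluster -f cluster.yaml
```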

When my file did not have the correct options, the problem I was seeing was that eksctl got stuck with the message:

waiting for at least 1 node(s) to become ready in "ng-1"

To debug the errors, I connected by ssh to the EC2 instance that was created and checked the logs of the kubelet service with this command:

journalctl -u kubelet.service

In order to have the CPUs pinned to a physical CPU, I had to make the requests and the limits equal (both for CPU and memory …
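A pod spec along those lines, with requests equal to limits so the pod gets the Guaranteed QoS class required for static CPU pinning (the names, image, and values here are illustrative, not from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod        # illustrative name
spec:
  containers:
    - name: app
      image: nginx        # illustrative image
      resources:
        requests:
          cpu: "2"        # integer CPU count, needed for exclusive cores
          memory: "1Gi"
        limits:
          cpu: "2"        # equal to the request -> Guaranteed QoS
          memory: "1Gi"
```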


Continue reading