Forward HTTP request to pod running on worker node



  • Context: I am trying to set up a Kubernetes cluster on my PC using VirtualBox. Here's the setup: [diagram of the VirtualBox network topology]. In this setup I am able to launch pods from the control plane, as well as send HTTP requests to the pods.

    Here, CP01 is the master/control-plane node, and W01 and W02 are the worker nodes.

    I initialized the control plane using:

    master] kubeadm init --apiserver-advertise-address 10.5.5.1 --pod-network-cidr 10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock
    worker] kubeadm join 10.5.5.1:6443  --token jv5pxe.t07snw8ewrbejn6i   --cri-socket  unix:///var/run/cri-dockerd.sock      --discovery-token-ca-cert-hash sha256:10fc6e3fdc2085085f1ea1a75c9eb4316f13759b0d3773377db86baa30d8b972
    
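    (For completeness: node registration can be verified afterwards with the command below; the -o wide output also shows each node's INTERNAL-IP, i.e. which address kubeadm actually picked.)

    kubectl get nodes -o wide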

    I am able to create a deployment:

    [root@cp01 ~]# cat run.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
            - containerPort: 80
    

    Here's the LoadBalancer service:

    [root@cp01 ~]# cat serv.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 80
    

    From the cp01 node, I am able to hit both pods via the load balancer:

    [root@cp01 ~]# kubectl get services
    NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    hello-world   LoadBalancer   10.96.81.128   <pending>     80:31785/TCP   3m56s
    [root@cp01 ~]# curl 10.96.81.128|grep Welcome
    <title>Welcome to nginx!</title>
    <h1>Welcome to nginx!</h1>

    I have the following two queries:

    Q1: What settings do I need to apply to make flannel work without the enp0s3 interface? I don't need external connectivity on w01 and w02, but if I disable enp0s3 on w01 and w02, the flannel pods on the workers start failing. Here's how I reproduced the issue on w01:

    [root@cp01 ~]# kubectl get pods -A
    NAMESPACE      NAME                           READY   STATUS    RESTARTS        AGE
    kube-flannel   kube-flannel-ds-mp8zs          1/1     Running   8 (31m ago)     41m
    kube-flannel   kube-flannel-ds-p5kwj          1/1     Running   2 (12h ago)     3d1h
    kube-flannel   kube-flannel-ds-wqpwl          1/1     Running   0               24m
    kube-system    coredns-565d847f94-xddkq       1/1     Running   1 (12h ago)     15h
    kube-system    coredns-565d847f94-xl7pj       1/1     Running   1 (12h ago)     15h
    kube-system    etcd-cp01                      1/1     Running   2 (12h ago)     3d1h
    kube-system    kube-apiserver-cp01            1/1     Running   2 (12h ago)     3d1h
    kube-system    kube-controller-manager-cp01   1/1     Running   2 (12h ago)     3d1h
    kube-system    kube-proxy-9f4xm               1/1     Running   2 (12h ago)     3d1h
    kube-system    kube-proxy-dhhqc               1/1     Running   2 (12h ago)     3d1h
    kube-system    kube-proxy-w64gc               1/1     Running   1 (2d16h ago)   3d1h
    kube-system    kube-scheduler-cp01            1/1     Running   2 (12h ago)     3d1h
    [root@cp01 ~]# ssh w01 'nmcli con down enp0s3'
    [root@cp01 ~]# kubectl delete pod -n kube-flannel kube-flannel-ds-mp8zs
    pod "kube-flannel-ds-mp8zs" deleted
    [root@cp01 ~]# kubectl delete pod -n kube-flannel kube-flannel-ds-wqpwl
    pod "kube-flannel-ds-wqpwl" deleted
    [root@cp01 ~]# kubectl get pods -A
    NAMESPACE      NAME                           READY   STATUS             RESTARTS        AGE
    kube-flannel   kube-flannel-ds-2kqq5          0/1     CrashLoopBackOff   2 (25s ago)     45s
    kube-flannel   kube-flannel-ds-kcwk6          1/1     Running            0               49s
    kube-flannel   kube-flannel-ds-p5kwj          1/1     Running            2 (12h ago)     3d1h
    kube-system    coredns-565d847f94-xddkq       1/1     Running            1 (12h ago)     15h
    kube-system    coredns-565d847f94-xl7pj       1/1     Running            1 (12h ago)     15h
    kube-system    etcd-cp01                      1/1     Running            2 (12h ago)     3d1h
    kube-system    kube-apiserver-cp01            1/1     Running            2 (12h ago)     3d1h
    kube-system    kube-controller-manager-cp01   1/1     Running            2 (12h ago)     3d1h
    kube-system    kube-proxy-9f4xm               1/1     Running            2 (12h ago)     3d1h
    kube-system    kube-proxy-dhhqc               1/1     Running            2 (12h ago)     3d1h
    kube-system    kube-proxy-w64gc               1/1     Running            1 (2d16h ago)   3d1h
    kube-system    kube-scheduler-cp01            1/1     Running            2 (12h ago)     3d1h
    

    Here's the reason:

    [root@cp01 ~]# kubectl logs -n kube-flannel kube-flannel-ds-2kqq5
    Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
    I1005 08:17:59.211331       1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
    W1005 08:17:59.211537       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    E1005 08:17:59.213916       1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-2kqq5': Get "https://10.96.0.1:443/api/v1/namespaces/kube-flannel/pods/kube-flannel-ds-2kqq5": dial tcp 10.96.0.1:443: connect: network is unreachable
    
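    From the CLI flags dump in the log above, flannel appears to support iface/ifaceRegex options, so the interface can be pinned explicitly instead of letting flannel follow the default route. A minimal sketch of the change, assuming the internal interface is named enp0s8 on every node and that the DaemonSet carries the stock flannel args:

    # kubectl edit ds -n kube-flannel kube-flannel-ds
    containers:
    - name: kube-flannel
      args:
      - --ip-masq
      - --kube-subnet-mgr
      - --iface=enp0s8   # pin flannel to the internal network

    Note that the crash above happens even before interface selection, while flannel is contacting the API server at 10.96.0.1, so the node would presumably still need a working route to the control plane over enp0s8.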

    Q2: I plan to send HTTP requests to the load balancer via the enp0s3 interface on the cp01 node. Do I need to:

    • reset the cluster and run kubeadm init again using the 0.0.0.0 IP, or
    • is there a way to accomplish this without disturbing the existing configuration/setup (using an Ingress?)? One possibility is sketched below.
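    One hedged observation: the kubectl get services output above shows that the LoadBalancer service was also allocated NodePort 31785 (the 80:31785/TCP column), and a NodePort is opened on every node address. Assuming cp01's enp0s3 address is 192.168.29.73 (the address that appears in the UPDATE below), something like the following might already work from the external network without re-initializing the cluster:

    curl http://192.168.29.73:31785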

    Please advise. I have started learning Kubernetes recently, so please excuse me if I have missed some basic concepts of the Kubernetes world while framing these queries. Please do let me know if something is unclear.

    UPDATE:

    Q2: I tried initializing the control-plane node via:

    [root@cp01 ~]# kubeadm init --apiserver-advertise-address 0.0.0.0  --pod-network-cidr 10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock
    

    But with this method, the worker nodes are unable to join the cluster over the 10.5.5.0/24 (enp0s8) network. They are only able to join as:

    kubeadm join 192.168.29.73:6443 --cri-socket  unix:///var/run/cri-dockerd.sock   --token 6srhn0.1lyiffiml08qcnfw --discovery-token-ca-cert-hash sha256:ab87e3e04da65176725776c08e0f924bbc07b26d0f8e2501793067e477ab6379
    
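    If the goal is an API server reachable on both networks, one possible sketch (untested; the file name kubeadm-config.yaml is made up here, the addresses come from the outputs above) is to keep advertising 10.5.5.1 but add the bridged address as an extra certificate SAN via a kubeadm config file:

    # kubeadm-config.yaml (hypothetical file name)
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 10.5.5.1            # workers keep joining over enp0s8
    nodeRegistration:
      criSocket: unix:///var/run/cri-dockerd.sock
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: 10.244.0.0/16
    apiServer:
      certSANs:
      - 192.168.29.73                       # also accept the bridged (enp0s3) address

    kubeadm init --config kubeadm-config.yaml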


  • Kubelet would usually bind to whichever interface holds your default gateway, and I suspect your SDN pods are failing to start because of this. At that stage, I would check netstat, ip route, and the iptables rules.
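    A minimal diagnostic sketch along those lines, run on the failing worker (addresses and CIDRs taken from the question):

    # is any default route left once enp0s3 is down?
    ip route show
    # can the node still reach the in-cluster API service address?
    ip route get 10.96.0.1
    # kube-proxy NAT rules that should translate the service IP to the API server
    iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1
    # which addresses the node services are actually listening on
    netstat -tlnp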

    You may want to remove the internal network altogether. When deploying Kubernetes, it is recommended that your nodes sit in a single LAN, using load balancers to expose your ingresses.

    Otherwise, you need your default gateway to point to your internal network, and then you'll need some kind of router NAT-ing traffic and sending it back out through your bridged network.
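    A rough sketch of that idea, assuming cp01 keeps its bridged enp0s3 interface and acts as the router for the 10.5.5.0/24 internal network (one possible layout, not a tested recipe):

    # on w01/w02: send everything through cp01's internal address
    ip route replace default via 10.5.5.1 dev enp0s8

    # on cp01: forward and masquerade internal traffic out of the bridged interface
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -s 10.5.5.0/24 -o enp0s3 -j MASQUERADE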

