Cannot start Kubernetes Dashboard



  • I'm trying to install a Kubernetes cluster with the Dashboard on Ubuntu 20.04 LTS using the following commands:

    swapoff -a
    

    Remove the following line from /etc/fstab:

    /swap.img none swap sw 0 0

    sudo apt update
    sudo apt install docker.io
    sudo systemctl start docker
    sudo systemctl enable docker

    sudo apt install apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
    sudo mv ~/kubernetes.list /etc/apt/sources.list.d
    sudo apt update
    sudo apt install kubeadm kubelet kubectl kubernetes-cni

    sudo kubeadm init --pod-network-cidr=192.168.0.0/16

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

    kubectl proxy --address 192.168.1.133 --accept-hosts '.*'

    But when I open http://192.168.1.133:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy

    I get:

    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "services \"kubernetes-dashboard\" not found",
      "reason": "NotFound",
      "details": {
        "name": "kubernetes-dashboard",
        "kind": "services"
      },
      "code": 404
    }
    

    I tried to list the pods:

    root@ubuntukubernetis1:~# kubectl get pods --all-namespaces
    NAMESPACE              NAME                                         READY   STATUS              RESTARTS       AGE
    kube-flannel           kube-flannel-ds-f6bwx                        0/1     Error               11 (29s ago)   76m
    kube-system            coredns-6d4b75cb6d-rk4kq                     0/1     ContainerCreating   0              77m
    kube-system            coredns-6d4b75cb6d-vkpcm                     0/1     ContainerCreating   0              77m
    kube-system            etcd-ubuntukubernetis1                       1/1     Running             1 (52s ago)    77m
    kube-system            kube-apiserver-ubuntukubernetis1             1/1     Running             1 (52s ago)    77m
    kube-system            kube-controller-manager-ubuntukubernetis1    1/1     Running             1 (52s ago)    77m
    kube-system            kube-proxy-n6ldq                             1/1     Running             1 (52s ago)    77m
    kube-system            kube-scheduler-ubuntukubernetis1             1/1     Running             1 (52s ago)    77m
    kubernetes-dashboard   dashboard-metrics-scraper-7bfdf779ff-sdnc8   0/1     Pending             0              75m
    kubernetes-dashboard   dashboard-metrics-scraper-8c47d4b5d-2sxrb    0/1     Pending             0              59m
    kubernetes-dashboard   kubernetes-dashboard-5676d8b865-fws4j        0/1     Pending             0              59m
    kubernetes-dashboard   kubernetes-dashboard-6cdd697d84-nmpv2        0/1     Pending             0              75m
    root@ubuntukubernetis1:~#
    

    Checking kube-flannel pod logs:

    kubectl logs -n kube-flannel kube-flannel-ds-f6bwx -p
    Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
    I0724 14:49:57.782499       1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
    W0724 14:49:57.782676       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
    E0724 14:49:57.892230       1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-f6bwx': pods "kube-flannel-ds-f6bwx" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
    

    Do you know how I can fix the issue?



  • The URL you're querying (http://192.168.1.133:8001/api/v1/namespaces/default/services/https:kubernetes-dashboard:https/proxy) is wrong.

    According to the last YAML file you applied (https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml), and as we can see in your kubectl get pods -A output, the kubernetes-dashboard Service lives in the kubernetes-dashboard namespace, not the default namespace.
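    With the correct namespace (and the Service and port names from the v2.5.0 recommended manifest), the proxied URL would look like the following -- a sketch of the standard Dashboard proxy path, not something taken from your output:

    ```
    http://192.168.1.133:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
    ```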

    Although, if you just want to reach the Kubernetes Dashboard: instead of the kubectl proxy command you ran, I would go with kubectl port-forward -n kubernetes-dashboard deploy/kubernetes-dashboard 8443:8443, then open my browser at https://localhost:8443


    Then, there's the case of your SDN. In your kubectl get pods output, we can see the kube-flannel pod, in the kube-flannel namespace, is in Error.

    Look at the logs for this container and try to figure out why it does not start (kubectl logs -n kube-flannel kube-flannel-ds-xxxx -p).

    It's been years since I last set up flannel, but I remember that in addition to applying their RBAC & DaemonSet YAMLs, I also had to patch nodes, allocating each one a CIDR. E.g.: kubectl patch node my-node-1 -p '{ "spec": { "podCIDR": "10.32.3.0/24" } }' --type merge (each podCIDR must be unique; each node gets its own range for hosting Pods). If I'm not mistaken, each podCIDR must be a subset of the Network subnet in flannel's net-conf.json -- look at the ConfigMap created while installing flannel.
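    For reference, here is roughly what the net-conf.json key in flannel's kube-flannel-cfg ConfigMap looks like with its usual defaults (values may differ in your install -- check the actual ConfigMap on your cluster):

    ```yaml
    # Excerpt from the kube-flannel-cfg ConfigMap (typical defaults).
    # Each node's podCIDR must be a subset of "Network".
    net-conf.json: |
      {
        "Network": "10.244.0.0/16",
        "Backend": {
          "Type": "vxlan"
        }
      }
    ```

    Note that you ran kubeadm init with --pod-network-cidr=192.168.0.0/16: if flannel's Network does not match the CIDR your nodes are actually allocated from, flannel typically fails to start.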

    As for your last comment: the error tells us the following

    Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-f6bwx': pods "kube-flannel-ds-f6bwx" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
    

    Looking back at the files you used to set up flannel (https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml, then https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml): to fix your SDN, you may want to create the following ClusterRoleBinding:

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: flannel-fix
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
      - kind: ServiceAccount
        name: flannel
        namespace: kube-flannel
    

    And for the record: the kube-flannel-rbac manifest is not necessary in your case. It would be, had you installed flannel from their legacy manifest (https://github.com/flannel-io/flannel/blob/master/Documentation/k8s-manifests/kube-flannel-legacy.yml). In your case, the ClusterRoleBinding we're fixing here should have been created properly by applying only https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml
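    Assuming you save the ClusterRoleBinding above as flannel-fix.yaml (a hypothetical filename), applying it and bouncing the failing pod would look something like this -- the pod name is taken from your earlier output, and the DaemonSet recreates it automatically:

    ```sh
    kubectl apply -f flannel-fix.yaml
    # delete the Error'd pod so the DaemonSet recreates it with working RBAC
    kubectl delete pod -n kube-flannel kube-flannel-ds-f6bwx
    # watch the replacement come up
    kubectl get pods -n kube-flannel -w
    ```

    Once flannel is Running, the coredns pods stuck in ContainerCreating and the Pending dashboard pods should be able to start.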

