Kubernetes - kube-dns remains in "ContainerCreating"
I manually installed Kubernetes 1.6.6 on bare metal (Ubuntu 16.04) and deployed Calico 2.3 (using the etcd 3.0.17 behind kube-apiserver) and kube-dns.
Without RBAC there are no problems.
But after enabling RBAC by adding "--authorization-mode=RBAC" to kube-apiserver, kube-dns will not come up: its status remains "ContainerCreating".
I checked "kubectl describe pod kube-dns..":
Events:
  FirstSeen  LastSeen  Count  From               Type     Reason          Message
  ---------  --------  -----  ----               ----     ------          -------
  10m        10m       1      default-scheduler  Normal   Scheduled       assigned kube-dns-1759312207-t35t3 to work01
  9m         9m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8c2585b1b3170f220247a6abffb1a431af58060f2bcc715fe29e7c2144d19074
  8m         8m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: c6962db6c5a17533fbee563162c499631a647604f9bffe6bc71026b09a2a0d4f
  7m         7m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: failed "KillPodSandbox" for "f693931a-7335-11e7-aaa2-525400aa8825" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1759312207-t35t3_kube-system\" network: CNI failed to retrieve network namespace path: Error: No such container: 9adc41d07a80db44099460c6cc56612c6fbcd53176abcc3e7bbf843fca8b7532"
  5m         5m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 4c2d450186cbec73ea28d2eb4c51497f6d8c175b92d3e61b13deeba1087e9a40
  4m         4m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: failed "KillPodSandbox" for "f693931a-7335-11e7-aaa2-525400aa8825" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1759312207-t35t3_kube-system\" network: CNI failed to retrieve network namespace path: Error: No such container: 12df544137939d2b8af8d70964e46b49f5ddec1228da711c084ff493443df465"
  3m         3m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: c51c9d50dcd62160ffe68d891967d118a0f594885e99df3286d0c4f8f4986970
  2m         2m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 94533f19952c7d5f32e919c03d9ec5147ef63d4c1f35dd4fcfea34306b9fbb71
  1m         1m        1      kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 166a89916c1e6d63e80b237e5061fd657f091f3c6d430b7cee34586ba8777b37
  16s        12s       2      kubelet, work01    Warning  FailedSync      (events with common reason combined)
  10m        2s        207    kubelet, work01    Warning  FailedSync      Error syncing pod, skipping: failed "CreatePodSandbox" for "kube-dns-1759312207-t35t3_kube-system(f693931a-7335-11e7-aaa2-525400aa8825)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-1759312207-t35t3_kube-system(f693931a-7335-11e7-aaa2-525400aa8825)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-1759312207-t35t3_kube-system\" network: the server does not allow access to the requested resource (get pods kube-dns-1759312207-t35t3)"
  10m        1s        210    kubelet, work01    Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
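The decisive entry is the repeated CreatePodSandbox failure: the CNI plugin is denied "get pods" by the API server, which points at authorization rather than the network plugin itself. One way to confirm such a denial is kubectl's built-in authorization check (the service-account name calico-node below is an assumption; substitute whatever account your CNI components run as):

```shell
# Ask the API server whether the given identity may perform the exact
# verb/resource pair from the error message. "no" confirms an RBAC denial.
kubectl auth can-i get pods --namespace=kube-system \
  --as=system:serviceaccount:kube-system:calico-node
```

This only queries the authorizer; it does not change any state, so it is safe to run against a live cluster.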
My kubelet unit file:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /var/log/containers
ExecStartPre=/bin/mkdir -p /etc/cni/net.d
ExecStartPre=/bin/mkdir -p /opt/cni/bin
ExecStart=/usr/local/bin/kubelet \
  --api-servers=http://127.0.0.1:8080 \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --register-node=true \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  --container-runtime=docker
My kube-apiserver manifest:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: kube-apiserver:v1.6.6
    command:
    - kube-apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://127.0.0.1:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/16
    - --secure-port=6443
    - --advertise-address=172.30.1.10
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
    - --tls-cert-file=/srv/kubernetes/apiserver.pem
    - --tls-private-key-file=/srv/kubernetes/apiserver-key.pem
    - --client-ca-file=/srv/kubernetes/ca.pem
    - --service-account-key-file=/srv/kubernetes/apiserver-key.pem
    - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
    - --anonymous-auth=false
    - --authorization-mode=RBAC
    - --token-auth-file=/srv/kubernetes/known_tokens.csv
    - --basic-auth-file=/srv/kubernetes/basic_auth.csv
    - --storage-backend=etcd3
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - name: https
      hostPort: 6443
      containerPort: 6443
    - name: local
      hostPort: 8080
      containerPort: 8080
    volumeMounts:
    - name: srvkube
      mountPath: "/srv/kubernetes"
      readOnly: true
    - name: etcssl
      mountPath: "/etc/ssl"
      readOnly: true
  volumes:
  - name: srvkube
    hostPath:
      path: "/srv/kubernetes"
  - name: etcssl
    hostPath:
      path: "/etc/ssl"
I found the cause. The issue is not related to kube-dns itself: I had missed applying the ClusterRole/ClusterRoleBinding before deploying Calico.
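For illustration, the missing grant looks roughly like the manifest below. This is only a sketch of the shape of the fix: the authoritative rules are in the rbac.yaml shipped with the Calico release you deploy, and the calico-node names here are assumptions taken from the standard Calico manifests.

```yaml
# Sketch only: grants the calico-node service account read access to the
# resources the CNI plugin was denied above. Use the official Calico
# rbac.yaml for the complete, version-matched rule set.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
- apiGroups: [""]
  resources: ["pods", "nodes"]
  verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: calico-node
  apiGroup: rbac.authorization.k8s.io
```

Applying this (or the official rbac.yaml) before the Calico DaemonSet lets the CNI plugin look up the pod, and kube-dns then leaves "ContainerCreating".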