The community provides a front-end UI project for Kubernetes clusters called Dashboard: https://github.com/kubernetes/dashboard
It is a web-based Kubernetes user interface that lets you deploy containerized applications to the cluster, and inspect and manage cluster resources.
Deployment is straightforward:
[root@2020 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Check how the deployment went:
[root@2020 ~]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-9d85f5447-p9jbh                      1/1     Running   0          22h
kube-system            coredns-9d85f5447-tvnd2                      1/1     Running   0          22h
kube-system            etcd-2020                                    1/1     Running   0          22h
kube-system            kube-apiserver-2020                          1/1     Running   0          22h
kube-system            kube-controller-manager-2020                 1/1     Running   0          22h
kube-system            kube-proxy-4bs8c                             1/1     Running   0          22h
kube-system            kube-proxy-gf478                             1/1     Running   4          10h
kube-system            kube-scheduler-2020                          1/1     Running   0          22h
kube-system            weave-net-jcn82                              2/2     Running   0          21h
kube-system            weave-net-zmncl                              2/2     Running   12         10h
kubernetes-dashboard   dashboard-metrics-scraper-566cddb686-6q52z   1/1     Running   0          14m
kubernetes-dashboard   kubernetes-dashboard-7b5bf5d559-vzrr2        1/1     Running   0          14m
To browse the UI, you can run the following on the server:
[root@2020 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001
Then access the web UI from the machine running the proxy command. First, you need a token:
[root@2020 ~]# kubectl -n kube-system describe secret/namespace-controller-token-z64nn | grep token: | awk '{print $2}'
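Here I just borrow the existing namespace-controller token. As a side note, the dashboard documentation recommends creating a dedicated admin ServiceAccount instead; a minimal sketch (the name dashboard-admin is illustrative, not something created in this walkthrough):
# Sketch (not run here): dedicated admin account per the dashboard docs
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kubernetes-dashboard:dashboard-admin
# Print its token
kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}') | grep token: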
Now open the UI in a browser and sign in with that token.
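For reference, with kubectl proxy running, the v2 manifests applied above serve the dashboard behind the following apiserver proxy path (a quick sanity check from the same machine; the kubernetes-dashboard namespace comes from recommended.yaml):
# Should return the dashboard page through the local proxy
curl http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/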
If you want to access it from somewhere other than the local machine, things get awkward:
[root@2020 ~]# kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'
Starting to serve on [::]:8001
Even so, the dashboard behind the proxy machine still cannot be used: the dashboard only permits sign-in over plain HTTP when the request comes from localhost.
So it seems HTTP access only works locally.
That is awkward. Let's look for another way in.
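One workaround I did not pursue here is kubectl port-forward, which tunnels straight to the Service and avoids the proxy's plain-HTTP restriction (a sketch):
# Forward local port 8443 to the dashboard Service's HTTPS port
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443
# Then browse https://127.0.0.1:8443/ (or SSH-tunnel that port from another machine)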
Among all the pods, the api-server makes an appearance:
[root@2020 ~]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-p9jbh        1/1     Running   0          23h
coredns-9d85f5447-tvnd2        1/1     Running   0          23h
etcd-2020                      1/1     Running   0          23h
kube-apiserver-2020            1/1     Running   0          23h
kube-controller-manager-2020   1/1     Running   0          23h
kube-proxy-4bs8c               1/1     Running   0          23h
kube-proxy-gf478               1/1     Running   4          11h
kube-scheduler-2020            1/1     Running   0          23h
weave-net-jcn82                2/2     Running   0          22h
weave-net-zmncl                2/2     Running   12         11h
In principle, the api-server is exposed to the outside world, so there ought to be a way in through it.
Looking back at the kubeadm config used when the master node was initialized, the bound API port is 6443:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.247.132
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "2020"
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 172.17.0.0/16
scheduler: {}
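A quick reachability check against the secure port (a sketch; /version is readable by unauthenticated users under the default RBAC rules, and -k skips certificate verification):
# Expect a JSON blob reporting v1.17.0
curl -k https://172.16.247.132:6443/version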
Opening https://172.16.247.132:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in a browser returned a JSON error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}
So it is an identity problem.
I am running a recent Kubernetes release (v1.17 here), where RBAC is enabled by default and any unauthenticated request is treated as the user system:anonymous. The API server authenticates its own clients with certificates, so we need to extract client-certificate-data and client-key-data from admin.conf and bundle them into a p12 file the browser can use.
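Impersonation can confirm what the anonymous user is (not) allowed to do (a quick sketch):
# Expect "no": anonymous may not proxy services in kube-system
kubectl auth can-i get services/proxy -n kube-system --as=system:anonymous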
First, the crt:
[root@2020 ~]# grep "client-certificate-data" /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d -----BEGIN CERTIFICATE----- MIIC8jCCAdqgAwIBAgIIPSmAXZxZPngwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE AxMKa3ViZXJuZXRlczAeFw0yMDAxMDgxNTE5NTRaFw0yMTAxMDcxNTE5NTZaMDQx FzAVBgNVBAoTDnN5c3RlbTptYXN0ZXJzMRkwFwYDVQQDExBrdWJlcm5ldGVzLWFk bWluMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqqOGvimh/QKsCtZJ 2UHIVji/bmR4oN2r3Cvq7uS8UtQsqc8O972i/pjpkg8kUuaiK9Ef2/Ygvtinf8Yi VkTFTjZigzWnTwKpY1HELHWVfRx//LMKvJX6R86QYNYD735YCHp4xAKy0Ll2gQ2V 54T/aLxzccPlp4nLjKHybcH+Rs+6v/aHu7woxSooo/eI/yBTE3LkzVVHm7PcJYwR EizjB4xDbxHfgljZjrbFiSZoQGTCIesROyGDhIJlmKdapy/UMrYn3UvXF86eR+Mg IZIAamluoBnZHDwnWHvqrXwoRjSowkXDXLWluPwBpPsPCXjfWy1gNr2UArNa8ou5 dJ+2ewIDAQABoycwJTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH AwIwDQYJKoZIhvcNAQELBQADggEBALf93dvlsT43+k0xpszn4ZNEME4+L7+pQvEi C4kwFM3H4sKJy0JjUGlll+SuTrRk09DJpODhoKn/GnS2sJV5v4zTuFEAJtQynIn0 8H27/XJubl+J34CdYSdjmshNimnciRSeKxXE2BncaNwoPjCxDsDqhVFDQKMkEMp8 jkOzrr8HJPE+vAQ3jLUZ3BxZfUbNZKPA/gGn2jghupxgshAldF1G4RS487e3NpLK ysmKtwBf5FJY8yMZt72K6XEBWDEQOVSwzPjWTmiqfuUj02/mWXgUr5PCY+JE9Dmo 8rs2yiVzIopi3hJK+eMM0XkqWBhVdG20mYTeKdtZ/kAqiVv1naY= -----END CERTIFICATE----- [root@2020 ~]# grep "client-certificate-data" /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d > kube.crt
Then the key:
[root@2020 ~]# grep "client-key-data" /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAqqOGvimh/QKsCtZJ2UHIVji/bmR4oN2r3Cvq7uS8UtQsqc8O 972i/pjpkg8kUuaiK9Ef2/Ygvtinf8YiVkTFTjZigzWnTwKpY1HELHWVfRx//LMK vJX6R86QYNYD735YCHp4xAKy0Ll2gQ2V54T/aLxzccPlp4nLjKHybcH+Rs+6v/aH u7woxSooo/eI/yBTE3LkzVVHm7PcJYwREizjB4xDbxHfgljZjrbFiSZoQGTCIesR OyGDhIJlmKdapy/UMrYn3UvXF86eR+MgIZIAamluoBnZHDwnWHvqrXwoRjSowkXD XLWluPwBpPsPCXjfWy1gNr2UArNa8ou5dJ+2ewIDAQABAoIBABqqx6n8U6Z4vm5L IutjDm37HF+iL//j5LHZ4zNGZ/AB3KEFDO/GoSxstUPwPdr+1CVI31O+2Us6DKM5 UbBtuvAIK8kZn3YHknVFGAVisuQEijPxvyHNxnlmXMXlbGQHOLbKfQkU6uEXut9c QisWa9vwZ5JF7SQLstXdkUd548Uo/BfyiZ9SbtZPAuMVhhoJ5fSBt5DJh13j3cql l2kj8Ksb83MmA7r4cj4/u7oAQnEgSh1Sj0KpwAxwhBDnGlV0bugqFo7mJW2Sug3F 7eu0WNhSmOoXnQN49oL+NK+Va9YkODk8TPq/UOp690qL+1NB/tf1ptQJC7+Qugtz zCzVezECgYEAwHq2YCK1RaEutMV56GOemHUlGbtuj0XxhTcz/dBYUBqkM4VspX7e Rmzv0AlF6u5n/Xj36X9/b2x0ca5SsWm6LQXeJvnjcw1Mi0kEzeyXVAanYMyqZkID b0HzONPe5TjJr9mIkMnOHexnlXvQi2Dq4DpJiOhomQFf7r3glA9Ciw8CgYEA4vOm mG3iSloTUWxnGiUHpyl8IePu0It3wxjZ5f9o2YkPfZBThUY2G5aBRfDBVaSz8498 6eoZSZQqiNnOAairYix4DdxxB+qCrvPe+oiBlVPTOEAZ1nSxJV4m6/50VZwWrfzF o/m6GQafE32EjMaQ1+/v60KJk1wyPSkTIpSpzdUCgYAD1S22glprtYbxkJEZ4Iny 7To85e+QqMrjZTMC1dg8WBt27yw3q2wPqPGpidW7lN27PWJqYuCNvnIfJWJ+J+XO KbS/v/AYhWZFy8FtvE1THgLNOaYW/S+GUqDeO9HPbK8Pclx2zZ3uGJwDbQC9FcP3 jRGTyVTz3wQjA+Lp79faXwKBgQCiwIQaD8MV+t6bp5eQgjmowPFKBIFAgKPT/0BT 1gPE7Kt1Kkka7CzlP9tY4rxixIhgA+hafwy/XUfbeAZp3iF5d9ZoakuMl7o76Jth Iv96rPBuCFn/FxPqbkiPOJ0Iv7Tr9LdvTikMxVjSy1KA+ezpTiHJnp+2U4mbnpcg V2gmOQKBgDsFtnzWmKBQWYFeDfCw8y6WZ3+0zuuJpCf3GbWIVeIMPmwydrJDUwuW NZ4numVKKrVSSLPG0TPl9vnmTEYCwdyENcWjS8rF6OZTbjT8grZ/X3DA8gAfc5lI ZIFYQ+2FYluZpzk2c9iE46vn/chA/zB0rd04DqhmcgXd+/DuSULc -----END RSA PRIVATE KEY----- [root@2020 ~]# grep "client-key-data" /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d > kube.key
Finally, the p12:
[root@2020 ~]# openssl pkcs12 -export -clcerts -inkey kube.key -in kube.crt -out kube.p12 -name "kubernetes-client"
Enter Export Password:
Verifying - Enter Export Password:
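Before importing it, the bundle can be sanity-checked (a sketch):
# Prompts for the export password and prints the bundle's metadata
openssl pkcs12 -in kube.p12 -info -noout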
Import the generated kube.p12 into the local browser's certificate store and trust it.
Then select that certificate in the browser and visit: https://172.16.247.132:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
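The same request can also be reproduced from a shell with the extracted pair, which is easier to iterate on than the browser (a sketch; -k skips verifying the apiserver's own certificate):
# Authenticate with the admin client certificate instead of the browser
curl --cert kube.crt --key kube.key -k \
    "https://172.16.247.132:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"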
Annoyingly, it still did not work:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "services"
  },
  "code": 404
}
Now that is confusing: it claims the service cannot be found, yet the pods are clearly running:
[root@2020 ~]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-9d85f5447-p9jbh                      1/1     Running   0          33h
kube-system            coredns-9d85f5447-tvnd2                      1/1     Running   0          33h
kube-system            etcd-2020                                    1/1     Running   0          33h
kube-system            kube-apiserver-2020                          1/1     Running   0          33h
kube-system            kube-controller-manager-2020                 1/1     Running   0          33h
kube-system            kube-proxy-4bs8c                             1/1     Running   0          33h
kube-system            kube-proxy-gf478                             1/1     Running   4          22h
kube-system            kube-scheduler-2020                          1/1     Running   0          33h
kube-system            weave-net-jcn82                              2/2     Running   0          33h
kube-system            weave-net-zmncl                              2/2     Running   12         22h
kubernetes-dashboard   dashboard-metrics-scraper-566cddb686-6q52z   1/1     Running   0          11h
kubernetes-dashboard   kubernetes-dashboard-7b5bf5d559-vzrr2        1/1     Running   0          11h
Describing the pod reveals the catch: it was scheduled onto the worker node, worker-1:
[root@2020 ~]# kubectl describe pod kubernetes-dashboard-7b5bf5d559-vzrr2 --namespace kubernetes-dashboard
Name:         kubernetes-dashboard-7b5bf5d559-vzrr2
Namespace:    kubernetes-dashboard
Priority:     0
Node:         worker-1/172.16.247.134
Start Time:   Thu, 09 Jan 2020 21:24:18 +0800
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=7b5bf5d559
Annotations:  <none>
Status:       Running
IP:           10.44.0.1
IPs:
  IP:           10.44.0.1
Controlled By:  ReplicaSet/kubernetes-dashboard-7b5bf5d559
Containers:
  kubernetes-dashboard:
    Container ID:  docker://0aa98d6c683c3b7f2286800af426889d19cb261481e90bf26ebaeb54402c9d2f
    Image:         kubernetesui/dashboard:v2.0.0-beta4
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:a35498beec44376efcf8c4478eebceb57ec3ba39a6579222358a1ebe455ec49e
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    State:          Running
      Started:      Thu, 09 Jan 2020 21:25:43 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-d7fbw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-d7fbw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-d7fbw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
Frustrating: the api-server lives on the master node, so (the thinking went) the dashboard would have to run on the master node for the forwarding to work.
Look at the master node's details:
[root@2020 ~]# kubectl describe node 2020
Name:               2020
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=2020
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 08 Jan 2020 23:20:30 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
The Taints field carries node-role.kubernetes.io/master:NoSchedule (a taint, not a label), which means newly created pods will not be scheduled onto the master node. Now check the worker node:
[root@2020 ~]# kubectl describe node worker-1
Name:               worker-1
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker-1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=worker
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Jan 2020 10:57:49 +0800
Taints:             <none>
Unschedulable:      false
Sure enough, it is <none> here, meaning pods can be scheduled onto it.
So first change the master's setting and remove the taint:
[root@2020 ~]# kubectl taint nodes 2020 node-role.kubernetes.io/master-
node/2020 untainted
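(Should the default taint need to be restored later, the reverse command should do it; a sketch:)
# Put the standard master taint back
kubectl taint nodes 2020 node-role.kubernetes.io/master=:NoSchedule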
Check the taints again:
[root@2020 ~]# kubectl describe node 2020
Name:               2020
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=2020
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 08 Jan 2020 23:20:30 +0800
Taints:             <none>
Unschedulable:      false
With the taint removed, shut down the only worker node, worker-1, as a test, so that pods can only be scheduled onto the master.
Sure enough, the dashboard was immediately redeployed on the master:
[root@2020 ~]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS        RESTARTS   AGE
kube-system            coredns-9d85f5447-kvq46                      1/1     Running       0          34h
kube-system            coredns-9d85f5447-tflgc                      1/1     Running       0          34h
kube-system            etcd-2020                                    1/1     Running       0          34h
kube-system            kube-apiserver-2020                          1/1     Running       0          34h
kube-system            kube-controller-manager-2020                 1/1     Running       0          34h
kube-system            kube-proxy-4bs8c                             1/1     Running       0          34h
kube-system            kube-proxy-gf478                             1/1     Running       4          22h
kube-system            kube-scheduler-2020                          1/1     Running       0          34h
kube-system            weave-net-jcn82                              2/2     Running       0          33h
kube-system            weave-net-zmncl                              2/2     Running       12         22h
kubernetes-dashboard   dashboard-metrics-scraper-566cddb686-pswv7   1/1     Running       0          12h
kubernetes-dashboard   kubernetes-dashboard-7b5bf5d559-76xkt        1/1     Running       0          9m20s
kubernetes-dashboard   kubernetes-dashboard-7b5bf5d559-vzrr2        1/1     Terminating   0          12h
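Incidentally, powering the worker off is a blunt instrument; cordoning it should have the same scheduling effect (a sketch, not what was done here):
# Mark worker-1 unschedulable without taking it offline
kubectl cordon worker-1
# Evict the running dashboard pod so its ReplicaSet reschedules it on the master
kubectl -n kubernetes-dashboard delete pod kubernetes-dashboard-7b5bf5d559-vzrr2
# Undo later with: kubectl uncordon worker-1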
Gave it another try; still NotFound. So let's just tear the dashboard down first:
[root@2020 ~]# kubectl delete -f recommended.yaml
Then, for good measure, reboot the master node to get a fresh state.
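Judging by the pod list below, the dashboard manifests were applied again after the reboot, presumably with the same command as before:
# Redeploy the dashboard after the cleanup and reboot
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml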
[root@2020 ~]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-9d85f5447-kvq46                      1/1     Running   0          85m
kube-system            coredns-9d85f5447-tflgc                      1/1     Running   0          85m
kube-system            etcd-2020                                    1/1     Running   0          85m
kube-system            kube-apiserver-2020                          1/1     Running   0          85m
kube-system            kube-controller-manager-2020                 1/1     Running   0          85m
kube-system            kube-proxy-s9xxb                             1/1     Running   0          85m
kube-system            kube-scheduler-2020                          1/1     Running   0          85m
kube-system            weave-net-s64w2                              2/2     Running   0          77m
kubernetes-dashboard   dashboard-metrics-scraper-566cddb686-pswv7   1/1     Running   0          66m
kubernetes-dashboard   kubernetes-dashboard-7b5bf5d559-76xkt        1/1     Running   0          66m
But the web access still returns:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"kubernetes-dashboard\" not found",
  "reason": "NotFound",
  "details": {
    "name": "kubernetes-dashboard",
    "kind": "services"
  },
  "code": 404
}
To be continued...