Kubernetes Cluster: Worker Node Deployment

With the Master node in place, the Worker node is straightforward.

First, the deployment environment:

VMware Fusion: a CentOS 7 VM with the Minimal install; anything else gets installed as needed

Network: NAT mode; Host-Only should ultimately work as well

Resources: 1 CPU, 2 GB RAM, 20 GB disk; allocate more later if the cluster grows

Next, the same system configuration as on the Master node:

1. Disable the swap partition by commenting out the following line in /etc/fstab:

#/dev/mapper/centos-swap swap                    swap    defaults        0 0

2. Disable SELinux by changing the following line in /etc/selinux/config:

SELINUX=disabled

3. Set the iptables kernel parameters and enable IP forwarding. Since a pile of ports would otherwise need opening one by one, it is easiest to just stop firewalld during deployment:

[root@Worker-1 lihui]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
[root@Worker-1 lihui]# sysctl --system
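The three steps above can be collapsed into one prep script. A minimal sketch, assuming the same file paths shown above and a root shell:

```shell
# Worker node prep, same steps as on the Master (run as root).
swapoff -a                                     # turn swap off for the running system
sed -i '/centos-swap/ s/^/#/' /etc/fstab       # comment the swap line so it stays off after reboot
setenforce 0 || true                           # SELinux permissive now; the config change applies on reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
modprobe br_netfilter                          # module behind the bridge-nf-call sysctls
sysctl --system                                # reload everything under /etc/sysctl.d/
```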

Then configure the yum repositories and install directly.

docker-ce repository:

[root@Worker-1 lihui]# cd /etc/yum.repos.d/
[root@Worker-1 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@Worker-1 yum.repos.d]# yum clean all
[root@Worker-1 yum.repos.d]# yum repolist

Kubernetes repository:

[root@Worker-1 lihui]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1

Install and start the services:

yum install docker-ce kubeadm
systemctl start docker
systemctl start kubelet
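A hedged variant of the install step: the repository serves the latest packages, so pinning the versions to whatever the Master runs (v1.17.0 in this walkthrough) avoids version skew between nodes:

```shell
# Pin kubelet/kubeadm/kubectl to the Master's version (v1.17.0 here);
# a plain "yum install kubeadm" would pull the newest release instead.
yum install -y docker-ce kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
systemctl start docker
systemctl start kubelet
```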

Stop the firewall, and enable the services at boot:

[root@Worker-1 ~]# systemctl stop firewalld
[root@Worker-1 ~]# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
[root@Worker-1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@Worker-1 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Next comes the actual Worker node deployment, which really takes just one step.

At the end of the init phase when the Master node was deployed, kubeadm printed a join command; run it here:

kubeadm join 172.16.247.132:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8bb43d2d4c8aa40f19831be3cf0ae7b8a6a4e78bf40a7d53f20e93db6079f499
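The bootstrap token in that command expires after 24 hours by default. If the original init output is no longer at hand, a fresh join command can be printed on the Master:

```shell
# On the Master: mint a new bootstrap token and print the complete
# "kubeadm join ..." line, including the CA cert hash.
kubeadm token create --print-join-command
```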

Once it completes without errors, check the pod status on the Master node:

[root@2020 ~]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-p9jbh        1/1     Running   0          21h
coredns-9d85f5447-tvnd2        1/1     Running   0          21h
etcd-2020                      1/1     Running   0          21h
kube-apiserver-2020            1/1     Running   0          21h
kube-controller-manager-2020   1/1     Running   0          21h
kube-proxy-4bs8c               1/1     Running   0          21h
kube-proxy-gf478               1/1     Running   4          9h
kube-scheduler-2020            1/1     Running   0          21h
weave-net-jcn82                2/2     Running   0          20h
weave-net-zmncl                2/2     Running   12         9h

kube-proxy-gf478 and weave-net-zmncl are the newly added pods.

Now look at the node status; the new node's ROLES column shows <none>:

[root@2020 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
2020       Ready    master   21h   v1.17.0
worker-1   Ready    <none>   9h    v1.17.0

Fix the Worker node's ROLES directly with a label:

[root@2020 ~]# kubectl label node worker-1 node-role.kubernetes.io/worker=worker
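kubectl renders whatever follows `node-role.kubernetes.io/` in the label key as the ROLES column; the value after `=` can be anything. To check the label, or undo it again (run on the Master):

```shell
# Verify the label landed on the node
kubectl get node worker-1 --show-labels
# A trailing "-" on the key removes a label
kubectl label node worker-1 node-role.kubernetes.io/worker-
```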

Check the node status again, and the role now displays properly:

[root@2020 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
2020       Ready    master   21h   v1.17.0
worker-1   Ready    worker   9h    v1.17.0

Worker-1 node details:

[root@2020 ~]# kubectl describe node worker-1
Name:               worker-1
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker-1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/worker=worker
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 09 Jan 2020 10:57:49 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  worker-1
  AcquireTime:     <unset>
  RenewTime:       Thu, 09 Jan 2020 20:37:17 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Thu, 09 Jan 2020 20:15:25 +0800   Thu, 09 Jan 2020 20:15:25 +0800   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Thu, 09 Jan 2020 20:35:11 +0800   Thu, 09 Jan 2020 20:15:07 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 09 Jan 2020 20:35:11 +0800   Thu, 09 Jan 2020 20:15:07 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 09 Jan 2020 20:35:11 +0800   Thu, 09 Jan 2020 20:15:07 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 09 Jan 2020 20:35:11 +0800   Thu, 09 Jan 2020 20:15:07 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.16.247.134
  Hostname:    worker-1
Capacity:
  cpu:                1
  ephemeral-storage:  17394Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2027940Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  16415037823
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1925540Ki
  pods:               110
System Info:
  Machine ID:                 f84147f4754740ec8c40efd52518b640
  System UUID:                1C8A4D56-996E-8F52-0187-07A1C19BBCFC
  Boot ID:                    14a1c611-57c2-4d5b-9704-7655b9d8ae58
  Kernel Version:             3.10.0-1062.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.5
  Kubelet Version:            v1.17.0
  Kube-Proxy Version:         v1.17.0
Non-terminated Pods:          (2 in total)
  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                ------------  ----------  ---------------  -------------  ---
  kube-system                 kube-proxy-gf478    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9h
  kube-system                 weave-net-zmncl     20m (2%)      0 (0%)      0 (0%)           0 (0%)         9h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                20m (2%)  0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:
  Type     Reason                   Age                From                  Message
  ----     ------                   ----               ----                  -------
  Normal   Starting                 37m                kubelet, worker-1     Starting kubelet.
  Normal   NodeAllocatableEnforced  37m                kubelet, worker-1     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  37m (x2 over 37m)  kubelet, worker-1     Node worker-1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    37m (x2 over 37m)  kubelet, worker-1     Node worker-1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     37m (x2 over 37m)  kubelet, worker-1     Node worker-1 status is now: NodeHasSufficientPID
  Warning  Rebooted                 37m                kubelet, worker-1     Node worker-1 has been rebooted, boot id: 7846c587-b0cf-45e3-9719-b8f8f5370074
  Normal   NodeReady                37m                kubelet, worker-1     Node worker-1 status is now: NodeReady
  Normal   Starting                 37m                kube-proxy, worker-1  Starting kube-proxy.
  Warning  Rebooted                 36m                kubelet, worker-1     Node worker-1 has been rebooted, boot id: b04b67ed-ac1e-449b-8c50-9b490c3d4fff
  Normal   NodeAllocatableEnforced  36m                kubelet, worker-1     Updated Node Allocatable limit across pods
  Normal   Starting                 36m                kubelet, worker-1     Starting kubelet.
  Normal   NodeHasNoDiskPressure    36m (x2 over 36m)  kubelet, worker-1     Node worker-1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientMemory  36m (x2 over 36m)  kubelet, worker-1     Node worker-1 status is now: NodeHasSufficientMemory
  Normal   NodeReady                36m                kubelet, worker-1     Node worker-1 status is now: NodeReady
  Normal   NodeHasSufficientPID     36m (x2 over 36m)  kubelet, worker-1     Node worker-1 status is now: NodeHasSufficientPID
  Normal   Starting                 36m                kube-proxy, worker-1  Starting kube-proxy.
  Normal   Starting                 22m                kubelet, worker-1     Starting kubelet.
  Normal   NodeAllocatableEnforced  22m                kubelet, worker-1     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  22m (x2 over 22m)  kubelet, worker-1     Node worker-1 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet, worker-1     Node worker-1 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     22m (x2 over 22m)  kubelet, worker-1     Node worker-1 status is now: NodeHasSufficientPID
  Warning  Rebooted                 22m                kubelet, worker-1     Node worker-1 has been rebooted, boot id: 14a1c611-57c2-4d5b-9704-7655b9d8ae58
  Normal   NodeReady                22m                kubelet, worker-1     Node worker-1 status is now: NodeReady
  Normal   Starting                 22m                kube-proxy, worker-1  Starting kube-proxy.

OVER
