Deploying Kubernetes v1.17.4 with kubeadm (single master node)

Environment:

# OS: CentOS 7 (the master in the node listing below reports CentOS 8)
# Docker version: 19.03.8
# Kubernetes version: v1.17.4
# K8s master node IP: 192.168.3.62
# K8s worker node IP: 192.168.2.186
# Network plugin: flannel
# kube-proxy forwarding mode: ipvs
# Kubernetes package repo: Aliyun mirror
# service-cidr: 10.96.0.0/16
# pod-network-cidr: 10.244.0.0/16
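The kubeadm preflight output later in this walkthrough warns that the hostname "k8s-master" cannot be resolved, so it helps to set hostnames and /etc/hosts entries on every node up front. A sketch using the IPs above (the names k8s-master and nginx-1 match the shell prompts seen later):

# On the master (use nginx-1 on the worker)
hostnamectl set-hostname k8s-master
# On all nodes
cat >> /etc/hosts << EOF
192.168.3.62 k8s-master
192.168.2.186 nginx-1
EOF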

Preparation:

Perform the following on all nodes.

# Modify kernel parameters and disable swap
vim /etc/sysctl.conf
vm.swappiness=0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# Apply
sysctl -p

# Take effect immediately
swapoff -a && sysctl -w vm.swappiness=0

# Edit fstab so swap is no longer mounted (comment out the swap line)
vi /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0

# Install Docker: add the Docker repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Add the Docker configuration
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
  "max-concurrent-downloads": 20,
  "data-root": "/apps/docker/data",
  "exec-root": "/apps/docker/root",
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "log-driver": "json-file",
  "bridge": "docker0",
  "oom-score-adjust": -1000,
  "debug": false,
  "log-opts": {
    "max-size": "100M",
    "max-file": "10"
  },
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 1024000,
      "Soft": 1024000
    },
    "nproc": {
      "Name": "nproc",
      "Hard": 1024000,
      "Soft": 1024000
    },
    "core": {
      "Name": "core",
      "Hard": -1,
      "Soft": -1
    }
  }
}
# Note: kubeadm init later warns that the "cgroupfs" cgroup driver is detected;
# adding "exec-opts": ["native.cgroupdriver=systemd"] to daemon.json avoids that warning.

# Install dependencies
yum install -y yum-utils ipvsadm telnet wget net-tools conntrack ipset jq iptables curl sysstat libseccomp socat nfs-utils fuse fuse-devel

# Install Docker dependencies
yum install -y python-pip python-devel yum-utils device-mapper-persistent-data lvm2

# Install Docker
yum install -y docker-ce

# Reload service configuration
systemctl daemon-reload
# Restart Docker
systemctl restart docker
# Enable Docker on boot
systemctl enable docker

# Auto-load the ipvs kernel modules: create a boot-time loader script
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make /etc/sysconfig/modules/ipvs.modules executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run /etc/sysconfig/modules/ipvs.modules
/etc/sysconfig/modules/ipvs.modules

# Kubernetes repo configuration
cat << EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet, kubectl
yum install -y kubeadm kubelet kubectl
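Before initializing, it's worth confirming the prep took effect; a quick check (exact output varies by system):

# Swap should be off (all zeros on the Swap line)
free -m
# The ipvs modules should be loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Forwarding and bridge sysctls should report 1
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# Docker should be running; note the cgroup driver it reports
docker info | grep -i 'cgroup driver'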

Initialize Kubernetes

# Initialize the master node
kubeadm init --apiserver-advertise-address=0.0.0.0 \
  --apiserver-cert-extra-sans=127.0.0.1 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=all \
  --kubernetes-version=v1.17.4 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

# Init output:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=0.0.0.0 \
>   --apiserver-cert-extra-sans=127.0.0.1 \
>   --image-repository=registry.aliyuncs.com/google_containers \
>   --ignore-preflight-errors=all \
>   --kubernetes-version=v1.17.4 \
>   --service-cidr=10.96.0.0/16 \
>   --pod-network-cidr=10.244.0.0/16
W0321 15:48:22.675239    3126 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0321 15:48:22.675321    3126 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileExisting-tc]: tc not found in system path
    [WARNING Hostname]: hostname "k8s-master" could not be reached
    [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 192.168.1.169:53: no such host
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.3.62 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.62 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.62 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0321 15:49:11.580457    3126 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0321 15:49:11.581881    3126 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.003890 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fzyao0.q90my43drmpbstgw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.62:6443 --token fzyao0.q90my43drmpbstgw \
    --discovery-token-ca-cert-hash sha256:d7fb17be78dbaf019433e3d97423ab35d42800221a77f0b7d486e1c3e2544437

# Record the kubeadm join command at the bottom; it is used when initializing worker nodes.
# Enable kubelet on boot
systemctl enable kubelet
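The flags above can also be captured in a kubeadm config file and passed with --config, which is easier to keep in version control. A minimal sketch (not used in this walkthrough; the file name kubeadm-config.yaml is arbitrary, and --apiserver-advertise-address=0.0.0.0 is kubeadm's auto-detect default, so it is omitted):

# kubeadm-config.yaml -- equivalent to the flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.4
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 127.0.0.1
networking:
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.244.0.0/16

# Then initialize with:
kubeadm init --config kubeadm-config.yaml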

Copy the K8s kubeconfig file; it is used for all later cluster operations

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify that the master deployed successfully
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.3.62:6443
KubeDNS is running at https://192.168.3.62:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# Check whether the pods are deployed
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
kube-system   coredns-9d85f5447-5gltw              0/1     ContainerCreating   0          6m4s
kube-system   coredns-9d85f5447-v5q5l              0/1     ContainerCreating   0          6m4s
kube-system   etcd-k8s-master                      1/1     Running             0          6m5s
kube-system   kube-apiserver-k8s-master            1/1     Running             0          6m5s
kube-system   kube-controller-manager-k8s-master   1/1     Running             0          6m5s
kube-system   kube-proxy-frk98                     1/1     Running             0          6m4s
kube-system   kube-scheduler-k8s-master            1/1     Running             0          6m5s
# ContainerCreating here means the network plugin is not deployed yet
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m59s   v1.17.4
# NotReady because CNI is enabled but no CNI config file exists yet; the node becomes Ready once the network plugin is deployed
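If you are operating as root, an alternative to copying admin.conf is to point KUBECONFIG at it directly (scoped to the current shell session only):

export KUBECONFIG=/etc/kubernetes/admin.conf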

Switch kube-proxy forwarding to IPVS

# Find the kube-proxy config file
kubectl -n kube-system get cm
[root@k8s-master ~]# kubectl -n kube-system get cm
NAME                                 DATA   AGE
coredns                              1      9m46s
extension-apiserver-authentication   6      9m51s
kube-proxy                           2      9m46s
kubeadm-config                       2      9m48s
kubelet-config-1.17                  1      9m48s
# The settings live in the kube-proxy ConfigMap
# Edit the kube-proxy ConfigMap
kubectl -n kube-system edit cm kube-proxy

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: true          # enable masquerade-all so return traffic is masqueraded
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: "0.0.0.0"  # expose kube-proxy monitoring metrics
    mode: "ipvs"                   # enable ipvs mode
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.3.62:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2020-03-21T07:49:43Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "228"
  selfLink: /api/v1/namespaces/kube-proxy/configmaps/kube-proxy
  uid: 4267c9ed-34f6-44d4-95ee-21cda3c4ba64

# Adjust anything else to your needs, then save with :wq
[root@k8s-master ~]# kubectl -n kube-system edit cm kube-proxy
configmap/kube-proxy edited
# Delete the running kube-proxy pod so it restarts with the new config
[root@k8s-master ~]# kubectl -n kube-system get pod
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-9d85f5447-5gltw              0/1     ContainerCreating   0          21m
coredns-9d85f5447-v5q5l              0/1     ContainerCreating   0          21m
etcd-k8s-master                      1/1     Running             0          21m
kube-apiserver-k8s-master            1/1     Running             0          21m
kube-controller-manager-k8s-master   1/1     Running             0          21m
kube-proxy-frk98                     1/1     Running             0          21m
kube-scheduler-k8s-master            1/1     Running             0          21m
# Delete the pod
[root@k8s-master ~]# kubectl -n kube-system delete pod kube-proxy-frk98
pod "kube-proxy-frk98" deleted
# Check that the replacement pod started
[root@k8s-master ~]# kubectl -n kube-system get pod | grep kube-proxy
kube-proxy-kkdl2   1/1   Running   0   49s
# Check that IPVS is active
[root@k8s-master ~]# ip a | grep ipvs
4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0
    inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0
[root@k8s-master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.3.62:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
TCP  10.96.0.10:9153 rr
UDP  10.96.0.10:53 rr
# Everything looks good
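Deleting the pod by name works, but restarting by label is handier on clusters with many nodes, and the logs confirm which proxier was chosen. A sketch (k8s-app=kube-proxy is the label kubeadm puts on the kube-proxy DaemonSet; the exact log wording can vary by version):

# Restart every kube-proxy pod at once; the DaemonSet recreates them with the new ConfigMap
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# Expect a line like "Using ipvs Proxier" in the logs
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i proxier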

Deploy the flannel network plugin

# You can deploy whichever network plugin you prefer
vim kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cni0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "forceAddress": false,
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        },
        {
          "name": "mytuning",
          "type": "tuning",
          "sysctl": {
            "net.core.somaxconn": "65535",
            "net.ipv4.ip_local_port_range": "1024 65535",
            "net.ipv4.tcp_keepalive_time": "600",
            "net.ipv4.tcp_keepalive_probes": "10",
            "net.ipv4.tcp_keepalive_intvl": "30"
          }
        }
      ]
    }
  # Remember to change Network below to your own pod CIDR
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

# Deploy flannel
kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
# Check whether flannel deployed successfully
[root@k8s-master ~]# kubectl -n kube-system get pod | grep flannel
kube-flannel-ds-amd64-6lmq7   1/1   Running   0   54s
# Check the network interfaces
ip a
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 4a:9c:77:15:fc:af brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::489c:77ff:fe15:fcaf/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::5861:f7ff:febf:8c96/64 scope link
       valid_lft forever preferred_lft forever
# Check the node status; unlike before, it is no longer NotReady
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   30m   v1.17.4
# Check pod status
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-5gltw              1/1     Running   0          30m
kube-system   coredns-9d85f5447-v5q5l              1/1     Running   0          30m
kube-system   etcd-k8s-master                      1/1     Running   0          30m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          30m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          30m
kube-system   kube-flannel-ds-amd64-6lmq7          1/1     Running   0          2m55s
kube-system   kube-proxy-kkdl2                     1/1     Running   0          8m49s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          30m
# CoreDNS is now fully deployed
[root@k8s-master ~]# ipvsadm -ln -c
IPVS connection entries
pro expire state       source             virtual            destination
TCP 14:58  ESTABLISHED 10.244.0.3:59832   10.96.0.1:443      192.168.3.62:6443
TCP 14:40  ESTABLISHED 10.96.0.1:53774    10.96.0.1:443      192.168.3.62:6443
TCP 14:58  ESTABLISHED 10.244.0.2:42436   10.96.0.1:443      192.168.3.62:6443
[root@k8s-master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.3.62:6443            Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
[root@k8s-master ~]# dig @10.96.0.10 www.baidu.com

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> @10.96.0.10 www.baidu.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18800
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 13, ADDITIONAL: 27

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: b405a7dd858ce3f72674a79e5e75ce9641c697ff59132605 (good)
;; QUESTION SECTION:
;www.baidu.com.            IN    A

;; ANSWER SECTION:
www.baidu.com.        30   IN    CNAME  www.a.shifen.com.
www.a.shifen.com.     30   IN    CNAME  www.wshifen.com.
www.wshifen.com.      30   IN    A      103.235.46.39
# DNS resolves correctly
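As an end-to-end smoke test of flannel plus IPVS, you can run a throwaway deployment, expose it as a Service, and curl its ClusterIP from the master. A sketch (the name nginx-test is arbitrary; replace <ClusterIP> with the address kubectl get svc prints):

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80
kubectl get svc nginx-test                  # note the ClusterIP
curl -I http://<ClusterIP>                  # expect HTTP/1.1 200 OK
kubectl delete svc,deployment nginx-test    # clean up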

Deploy the worker node

# Run the join command on the 192.168.2.186 node
kubeadm join 192.168.3.62:6443 --token fzyao0.q90my43drmpbstgw \
    --discovery-token-ca-cert-hash sha256:d7fb17be78dbaf019433e3d97423ab35d42800221a77f0b7d486e1c3e2544437
[root@nginx-1 ~]# kubeadm join 192.168.3.62:6443 --token fzyao0.q90my43drmpbstgw \
>     --discovery-token-ca-cert-hash sha256:d7fb17be78dbaf019433e3d97423ab35d42800221a77f0b7d486e1c3e2544437
W0321 16:23:21.870029    1700 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check that the node joined correctly (on the master)
[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    master   35m   v1.17.4   192.168.3.62    <none>        CentOS Linux 8 (Core)   4.18.0-147.5.1.el8_1.x86_64   docker://19.3.8
nginx-1      Ready    <none>   64s   v1.17.4   192.168.2.186   <none>        CentOS Linux 7 (Core)   3.10.0-1062.1.2.el7.x86_64    docker://19.3.8
# The node is working normally. Enable kubelet on boot:
systemctl enable kubelet
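The new node's ROLES column shows <none>; if you prefer it to read "worker", you can add the conventional role label (purely cosmetic; a sketch):

kubectl label node nginx-1 node-role.kubernetes.io/worker=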

Notes

# kubeadm tokens are valid for 24 hours by default; after one expires, generate a new token before joining more nodes.
# List tokens
kubeadm token list
# Create a token
kubeadm token create
# What if you forgot the join command that was printed when the master was initialized?
# Simple way
kubeadm token create --print-join-command
# Second way
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
# Next you can deploy monitoring, your first application, and so on.
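If a token exists but the --discovery-token-ca-cert-hash value has been lost, it can be recomputed from the cluster CA on the master (the standard recipe from the kubeadm documentation):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'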
