Environment configuration

IP           hostname    OS
10.11.66.44  k8s-master  CentOS 7.6
10.11.66.27  k8s-node1   CentOS 7.7
10.11.66.28  k8s-node2   CentOS 7.7
# The official recommendation is at least 2 CPUs and 2 GB of RAM per machine; the MAC address and product_uuid must also be unique on every node
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
[root@localhost ~]# hostnamectl --static set-hostname k8s-node1
[root@localhost ~]# hostnamectl --static set-hostname k8s-node2
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@k8s-master ~]# sestatus
SELinux status:                 disabled
[root@k8s-master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@k8s-master ~]# cat >> /etc/hosts << EOF    # run on all three nodes
> 10.11.66.44  k8s-master
> 10.11.66.27  k8s-node1
> 10.11.66.28  k8s-node2
> EOF
# ping each host to verify that /etc/hosts is configured correctly
[root@k8s-master ~]# ping k8s-master
PING k8s-master (10.11.66.44) 56(84) bytes of data.
64 bytes from k8s-master (10.11.66.44): icmp_seq=1 ttl=64 time=0.012 ms
64 bytes from k8s-master (10.11.66.44): icmp_seq=2 ttl=64 time=0.016 ms
^C
--- k8s-master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.012/0.014/0.016/0.002 ms
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (10.11.66.27) 56(84) bytes of data.
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=1 ttl=64 time=0.924 ms
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=2 ttl=64 time=1.36 ms
^C
--- k8s-node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.924/1.146/1.369/0.225 ms
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (10.11.66.28) 56(84) bytes of data.
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=1 ttl=64 time=1.18 ms
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=2 ttl=64 time=1.30 ms
^C
--- k8s-node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 1.180/1.240/1.300/0.060 ms
[root@k8s-master ~]# ip link   # the MAC addresses of the three machines must differ
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:26:38:13 brd ff:ff:ff:ff:ff:ff
[root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid    # the product UUIDs of the three machines must differ
07B64D56-0D8B-6047-8E55-9ADE9F263813
# Switch to the Aliyun yum mirror (run on all three nodes)
[root@k8s-master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# rm -rf /var/cache/yum && yum makecache && yum -y update && yum -y autoremove
# Note: if the network is slow, the update step can be skipped
# Install dependencies (run on all three nodes)
[root@k8s-master ~]# yum -y install epel-release.noarch conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
# Configure iptables (run on all three nodes)
[root@k8s-master ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable the swap partition (run on all three nodes)
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
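The sed above comments out the swap entry in /etc/fstab so that swap stays off after a reboot. A minimal sketch on a throwaway copy shows exactly what it does (nothing real is touched):

```shell
# Demonstrate the fstab sed on a scratch copy.
cat > /tmp/fstab.demo <<'FSTAB'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
FSTAB
# Same expression as above: prefix '#' to any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line gains a leading `#`; the root filesystem entry is left untouched.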
# Load kernel modules (run on all three nodes)
[root@k8s-node2 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs               # LVS layer-4 load balancing
modprobe -- ip_vs_rr            # round-robin scheduler
modprobe -- ip_vs_wrr           # weighted round-robin scheduler
modprobe -- ip_vs_sh            # source-hash scheduler
modprobe -- nf_conntrack_ipv4   # connection-tracking module
modprobe -- br_netfilter        # let iptables see and process bridged traffic
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Set kernel parameters (run on all three nodes)
[root@k8s-master ~]# cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1     # pass bridged IPv4 traffic to iptables
net.bridge.bridge-nf-call-ip6tables = 1    # pass bridged IPv6 traffic to ip6tables
net.ipv4.ip_forward = 1                    # enable kernel IP forwarding
net.ipv4.tcp_tw_recycle = 0                # disable TIME_WAIT socket recycling
vm.swappiness = 0                          # avoid using swap
vm.overcommit_memory = 1                   # kernel memory-allocation (overcommit) policy
vm.panic_on_oom = 0                        # do not panic on OOM; let the OOM killer act
fs.inotify.max_user_watches = 89100        # max inotify watches per user
fs.file-max = 52706963                     # max open files system-wide
fs.nr_open = 52706963                      # max open files per process
net.ipv6.conf.all.disable_ipv6 = 1         # disable IPv6
net.netfilter.nf_conntrack_max = 2310720   # max number of tracked connections

# overcommit_memory is the kernel's policy for memory allocation; it takes one of three values: 0, 1, or 2
- overcommit_memory=0   The kernel heuristically checks whether enough memory is available; if so, the allocation succeeds, otherwise it fails and an error is returned to the process.
- overcommit_memory=1   The kernel allows all allocations, regardless of the current memory state.
- overcommit_memory=2   The kernel uses strict accounting and refuses allocations that would exceed swap plus a configurable fraction of physical memory.
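The current policy can be read back at runtime from /proc. A small sketch (assuming a Linux host) mapping the three values:

```shell
# Read the current overcommit policy; the value is always 0, 1 or 2.
mode=$(cat /proc/sys/vm/overcommit_memory)
case "$mode" in
  0) desc="heuristic overcommit (default)" ;;
  1) desc="always allow allocations" ;;
  2) desc="strict accounting" ;;
esac
echo "vm.overcommit_memory=$mode ($desc)"
```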

Deploy Docker

# Remove old docker versions (run on all three nodes)
[root@k8s-master ~]# yum remove docker \
                       docker-client \
                       docker-client-latest \
                       docker-common \
                       docker-latest \
                       docker-latest-logrotate \
                       docker-logrotate \
                       docker-selinux \
                       docker-engine-selinux \
                       docker-engine
Loaded plugins: fastestmirror
No Match for argument: docker
No Match for argument: docker-client
No Match for argument: docker-client-latest
No Match for argument: docker-common
No Match for argument: docker-latest
No Match for argument: docker-latest-logrotate
No Match for argument: docker-logrotate
No Match for argument: docker-selinux
No Match for argument: docker-engine-selinux
No Match for argument: docker-engine
No Packages marked for removal
# Install docker prerequisites (run on all three nodes)
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the docker repo (Aliyun mirror) (run on all three nodes)
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
# Enable the test repositories (optional)
[root@k8s-master ~]# yum-config-manager --enable docker-ce-edge
[root@k8s-master ~]# yum-config-manager --enable docker-ce-test
# Install docker (run on all three nodes)
[root@k8s-master ~]# yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
# Start docker and enable it at boot (run on all three nodes)
[root@k8s-master ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Configure docker (run on all three nodes)
[root@k8s-master ~]# sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service    # add this ExecStartPost after installation; otherwise docker sets the default policy of the iptables FORWARD chain to DROP
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
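A typo in /etc/docker/daemon.json will prevent the daemon from starting, so it is worth syntax-checking the JSON before the restart. A minimal sketch, run here against a scratch copy and assuming python3 is available:

```shell
# Write the same daemon.json to a scratch path and validate it with
# python's JSON parser before trusting it for a docker restart.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json: valid JSON"
```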

Deploy kubeadm and kubelet

# Configure the yum repo (run on all three nodes)
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install and start (run on all three nodes)
[root@k8s-master ~]# yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6
[root@k8s-master ~]# systemctl enable kubelet.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Configure shell completion (run on all three nodes)
[root@k8s-master ~]# yum -y install bash-completion
# Set up kubectl and kubeadm completion; takes effect on the next login
[root@k8s-master ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
[root@k8s-master ~]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
# List the images k8s depends on (run on all three nodes)
[root@k8s-master ~]# kubeadm config images list --kubernetes-version v1.18.6
W0803 15:10:18.910528   25638 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
# Pull the required images (run on all three nodes)
[root@k8s-master ~]# vim get-k8s-images.sh
#!/bin/bash
# Script for quickly pulling the K8S docker images

KUBE_VERSION=v1.18.6
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION  k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# untag the original names; the images themselves are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
[root@k8s-master ~]# sh get-k8s-images.sh
# Alternatively, export the images on the master and import them on the nodes:
[root@k8s-master ~]# docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o k8s-images.tar    # export on the master node
[root@k8s-node1 ~]# docker image load -i k8s-images.tar     # import on each node

Initialize the cluster

# Initialize the cluster with kubeadm init; the IP is this machine's own IP (run on k8s-master)
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.18.6 --apiserver-advertise-address=10.11.66.44 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16

--kubernetes-version=v1.18.6 : pin the control-plane version so the images downloaded above are used
--pod-network-cidr=10.244.0.0/16 : the Pod network CIDR; Pod-to-Pod traffic will use flannel, which expects 10.244.0.0/16
--service-cidr=10.1.0.0/16 : the Service (ClusterIP) network CIDR
# On success, the output ends with:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
    --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# Configure kubectl for the users that need it (run on k8s-master)
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Use the command below to verify that all Pods reach the Running state; this may take quite a while
[root@k8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-cxtrj             0/1     Pending   0          8m14s   <none>        <none>       <none>           <none>
kube-system   coredns-66bff467f8-znlm2             0/1     Pending   0          8m14s   <none>        <none>       <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-proxy-vh964                     1/1     Running   0          8m14s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
[root@k8s-master ~]# kubectl get pods -n kube-system   # an alternative view
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-cxtrj             0/1     Pending   0          3m52s
coredns-66bff467f8-znlm2             0/1     Pending   0          3m52s
etcd-k8s-master                      1/1     Running   0          4m1s
kube-apiserver-k8s-master            1/1     Running   0          4m1s
kube-controller-manager-k8s-master   1/1     Running   0          4m1s
kube-proxy-vh964                     1/1     Running   0          3m52s
kube-scheduler-k8s-master            1/1     Running   0          4m1s

Cluster network configuration (choose one of the following)

flannel network

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: adjust the cluster initialization address and check whether the image can be pulled (run on k8s-master)

Pod Network (using the Qiniu mirror)

# (run on k8s-master)
[root@k8s-master ~]# curl -o kube-flannel.yml   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# sed -i "s/quay.io\/coreos\/flannel/quay-mirror.qiniu.com\/coreos\/flannel/g" kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# rm -f kube-flannel.yml

calico network

# (run on k8s-master)
[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.15/manifests/calico.yaml
[root@k8s-master ~]# vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
[root@k8s-master ~]# kubectl apply -f calico.yaml
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-rchx6   1/1     Running   0          2m31s
calico-node-slgg9                          1/1     Running   0          2m32s
coredns-66bff467f8-cxtrj                   1/1     Running   0          55m
coredns-66bff467f8-znlm2                   1/1     Running   0          55m
etcd-k8s-master                            1/1     Running   0          55m
kube-apiserver-k8s-master                  1/1     Running   0          55m
kube-controller-manager-k8s-master         1/1     Running   0          55m
kube-proxy-vh964                           1/1     Running   0          55m
kube-scheduler-k8s-master                  1/1     Running   0          55m

Add worker nodes to the Kubernetes cluster

# On k8s-node1 and k8s-node2, run the join command that k8s-master printed earlier
[root@k8s-node1 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
    --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
[root@k8s-node2 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
    --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# If the join command was not saved, it can be regenerated with (run on k8s-master):
[root@k8s-master ~]# kubeadm token create --print-join-command --ttl=0
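The `--discovery-token-ca-cert-hash` value is simply a SHA-256 digest over the cluster CA's public key, so it can also be recomputed by hand from /etc/kubernetes/pki/ca.crt. A sketch of that pipeline, run here against a throwaway self-signed certificate so it is self-contained:

```shell
# On a real master, replace /tmp/ca.crt with /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/ca.key -out /tmp/ca.crt \
  -days 1 -nodes -subj '/CN=kubernetes' 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:${hash}"
```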
[root@k8s-master ~]# kubectl get nodes    # check node status; it may take quite a while before the nodes are Ready
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   64m     v1.18.6
k8s-node1    Ready    <none>   3m37s   v1.18.6
k8s-node2    Ready    <none>   3m36s   v1.18.6

Enable IPVS mode for kube-proxy

# (run on k8s-master)
[root@k8s-master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@k8s-master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl apply -f kube-proxy-configmap.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
# Recreate the kube-proxy pods so they pick up the new mode
[root@k8s-master ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy
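The sed on the exported ConfigMap simply flips the empty `mode` field to `ipvs`. A sketch on a minimal stand-in for the real kube-proxy-configmap.yaml:

```shell
# Stand-in for the exported kube-proxy ConfigMap (only the relevant field).
cat > /tmp/kube-proxy-configmap.yaml <<'EOF'
    kind: KubeProxyConfiguration
    mode: ""
EOF
sed -i 's/mode: ""/mode: "ipvs"/' /tmp/kube-proxy-configmap.yaml
grep 'mode:' /tmp/kube-proxy-configmap.yaml
```

After the substitution the grep prints `mode: "ipvs"`, which is what the re-applied ConfigMap will carry.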
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-rchx6   1/1     Running   0          14m
calico-node-kfc5p                          1/1     Running   0          7m17s
calico-node-slgg9                          1/1     Running   0          14m
calico-node-xcc92                          1/1     Running   0          7m16s
coredns-66bff467f8-cxtrj                   1/1     Running   0          67m
coredns-66bff467f8-znlm2                   1/1     Running   0          67m
etcd-k8s-master                            1/1     Running   0          67m
kube-apiserver-k8s-master                  1/1     Running   0          67m
kube-controller-manager-k8s-master         1/1     Running   0          67m
kube-proxy-6fnpb                           1/1     Running   0          16s
kube-proxy-tflld                           1/1     Running   0          20s
kube-proxy-x47c8                           1/1     Running   0          26s
kube-scheduler-k8s-master                  1/1     Running   0          67m

Deploy kubernetes-dashboard

# Dashboard install manifest (run on k8s-master)
cat > recommended.yaml <<-EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: kubernetes-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: kubernetes-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: kubernetes-metrics-scraper
    spec:
      containers:
        - name: kubernetes-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.0
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
EOF

Create certificates

[root@k8s-master ~]# cd /etc/kubernetes/
[root@k8s-master kubernetes]# mkdir dashboard-certs
[root@k8s-master kubernetes]# cd dashboard-certs/
[root@k8s-master dashboard-certs]# kubectl create namespace kubernetes-dashboard    # create the namespace
namespace/kubernetes-dashboard created
[root@k8s-master dashboard-certs]# kubectl get namespace   # list namespaces
NAME                   STATUS   AGE
default                Active   75m
kube-node-lease        Active   75m
kube-public            Active   75m
kube-system            Active   75m
kubernetes-dashboard   Active   9s
[root@k8s-master dashboard-certs]# openssl genrsa -out dashboard.key 2048   # generate a private key
Generating RSA private key, 2048 bit long modulus
........................................+++
..........+++
e is 65537 (0x10001)
[root@k8s-master dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'      # certificate signing request
[root@k8s-master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt     # self-sign the certificate
Signature ok
subject=/CN=dashboard-cert
Getting Private key
[root@k8s-master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard   # create the kubernetes-dashboard-certs secret
secret/kubernetes-dashboard-certs created
[root@k8s-master dashboard-certs]# kubectl get secret -A
NAMESPACE              NAME                                             TYPE                                  DATA   AGE
default                default-token-j6m5t                              kubernetes.io/service-account-token   3      77m
kube-node-lease        default-token-n5lxf                              kubernetes.io/service-account-token   3      77m
.........
.........
kubernetes-dashboard   default-token-bjp2p                              kubernetes.io/service-account-token   3      2m33s
kubernetes-dashboard   kubernetes-dashboard-certs                       Opaque                                2      90s

Create a dashboard administrator

[root@k8s-master dashboard-certs]# cat > dashboard-admin.yaml <<-EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin.yaml
serviceaccount/dashboard-admin created

Grant permissions to the user

[root@k8s-master dashboard-certs]# cat > dashboard-admin-bind-cluster-role.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin-bind-cluster-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created

Install the Dashboard

[root@k8s-master dashboard-certs]# kubectl apply -f /root/recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/kubernetes-metrics-scraper created
[root@k8s-master dashboard-certs]# kubectl get pods -A
NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-578894d4cd-rchx6      1/1     Running   0          29m
kube-system            calico-node-kfc5p                             1/1     Running   0          22m
kube-system            calico-node-slgg9                             1/1     Running   0          29m
kube-system            calico-node-xcc92                             1/1     Running   0          22m
kube-system            coredns-66bff467f8-cxtrj                      1/1     Running   0          82m
kube-system            coredns-66bff467f8-znlm2                      1/1     Running   0          82m
kube-system            etcd-k8s-master                               1/1     Running   0          82m
kube-system            kube-apiserver-k8s-master                     1/1     Running   0          82m
kube-system            kube-controller-manager-k8s-master            1/1     Running   0          82m
kube-system            kube-proxy-6fnpb                              1/1     Running   0          15m
kube-system            kube-proxy-tflld                              1/1     Running   0          15m
kube-system            kube-proxy-x47c8                              1/1     Running   0          15m
kube-system            kube-scheduler-k8s-master                     1/1     Running   0          82m
kubernetes-dashboard   kubernetes-dashboard-84b6b4578b-8t9bp         1/1     Running   0          75s
kubernetes-dashboard   kubernetes-metrics-scraper-86f6785867-bqvpg   1/1     Running   0          75s
[root@k8s-master dashboard-certs]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE    SELECTOR
dashboard-metrics-scraper   ClusterIP   10.1.16.181   <none>        8000/TCP        2m6s   k8s-app=kubernetes-metrics-scraper
kubernetes-dashboard        NodePort    10.1.99.111   <none>        443:30000/TCP   2m6s   k8s-app=kubernetes-dashboard

View and copy the user token

[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw

Access test

1. Open https://10.11.66.44:30000/ in a browser
2. Select "Token" and paste the token printed above

Log in with a kubeconfig file

Export the credentials

[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
ca.crt:     1025 bytes
[root@k8s-master ~]# vim .kube/config
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJZmk1aXZZNkxXb0F3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1ETXdOelF5TlRoYUZ3MHlNVEE0TURNd056UXpNREJhTURReApGekFWQmdOVkJBb1REbjU1YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpLazFvSnVPenQ3R3kzWnIKYjY5UkFqOXpzZ0hsNDdBOVVGOGIvQm1oYjVZalAwNTZuSG5FUVg4Qi85eDRaQmI0U2VLOTZkVVhIaTlFcEZuUQpDUlNKTFUwNnFRcW1GeUdXc1JJcEJPVDlUQmtrSW1XM25aRFZvKzI2dWFnVEp0V1BsOWtaWHZ5Z1hGUkJxeDNYCkxvTHIwZ2FrWE56dWd6TzBhMnFwQ1hQK0xmTE1Pa2gzUlJRZmQ4NUtaWWFXcWhNSStjNkZEVGtnTi84Z3BNKzYKWkE0a0UzT0x3OWFORkpvakl2amNIY1h5N0RNdGxCaFVRZVU4bEk2NHVRVk9zcDllTDR2WjBFRmo1djZFejNnbwp4ZFYrbzd6NWd3N3pzUENrdlJjc3RRcVhSRnV6emlpTVVQQTRDbzFhZkt3R1VZcmtBbmNzZnQxbVhGb2V3WDFPCjkwQ2xod0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDd0lKc0JreEV4UXBpeW8zTkNmQmkrL3hOQ0U3YnpNLzhmRAp4Q0VwQlZ0MWR1NkU1ZFdJQy82a3B0OVZzNHhHc1gvVVA4aUNaejVHZmtxT1JmTklDM0dZUFZJWlhNTUN2RHp0CnFubkk0Z1p2YXhyMnNoSDNpVkw2Rzd0Y2hCZmNJV0J4K1lnTEt3ZW9iTDUvaUorbXJmT2xsNXV4eit6cGUveHIKTjArWWVsTXJBaS9PeWpJR1N0WjVOblRzcnVILzZVRXRFZUwwRE9WQ0FrR3JQYnlkQVdNQUxaeWlQMTU4bCticQpNRkFkMHc2ZG82R3R2NlRCMGVaaXdzT1RHVzN6Ti85YlZWS2NFcGIzaE1MVVk0YVhvNC9laXl6TnF6MzlDdEpBCklPb3djOEFuakdGRDYraUdKbWU0VVdXcUxzMDI5US82eXF6WWFsUmFqWkwyL2FkNHRuaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBektrMW9KdU96dDdHeTNacmI2OVJBajl6c2dIbDQ3QTlVRjhiL0JtaGI1WWpQMDU2Cm5IbkVRWDhCLzl4NFpCYjRTZUs5NmRVWEhpOUVwRm5RQ1JTSkxVMDZxUXFtRnlHV3NSSXBCT1Q5VEJra0ltVzMKblpEVm8rMjZ1YWdUSnRXUGw5a1pYdnlnWEZSQnF4M1hMb0xyMGdha1hOenVnek8wYTJxcENYUCtMZkxNT2toMwpSUlFmZDg1S1pZYVdxaE1JK2M2RkRUa2dOLzhncE0rNlpBNGtFM09MdzlhTkZKb2pJdmpjSGNYeTdETXRsQmhVClFlVThsSTY0dVFWT3NwOWVMNHZaMEVGajV2NkV6M2dveGRWK283ejVndzd6c1BDa3ZSY3N0UXFYUkZ1enppaU0KVVBBNENvMWFmS3dHVVlya0FuY3NmdDFtWEZvZXdYMU85MENsaHdJREFRQUJBb0lCQVFDMHlLZXhkcGZnanhObAp1UFpRVXJvcFZTbDZ6WWhuNTA5U0JxR3V3R2xGSzRkNUxYYkxjQmgzanB5U2lncml4eE9PR0xlUHJZYmRSLzNICmUvcHpldXR0MC9HRVR2N0dJZ3A5NGIvUUxnSzl6TnVKY3ZhT1Bka3FGQjVFVDM2VGFFU09hdHlwZGxpbEZseG4KcmxWZEpaTHdGS1B0ejg3MG9LQzMzaUR4VTcvc2p4MWUwc3FFQ1NMdW5aY2FiaWJtYUpjT2RXYk0yM3JBdEdYQQp0YlFIYVZneHJldEZFREx0Ym9IMFB3Qit3eFNHdFh4WUFwSXR0RkowNWM3QWc1OVhWSFc2akdiYWd2VVlPcDFQCmdGVndSbjdwT1daNlNHTDBqdXgvbTl2UzZoakZ1aVVhVXhkM2ZOSVNKbUljRjZ2MTlmVTQwV3kyYXBCK1B0bHIKOU5zM2RpSGhBb0dCQU01ZW9QcFNGNmp0U1V0NTlERktZdUJUUG9wQWxiZFlxM0QvWnVBQlpkaFdJWXNoS1JvRwpUSGhjaTFlKzBPbmZlZ2pvMzhGM0syaHVJRVdrNEFhQ25QaWVyRWc3Yk1mVjNkMjYyNHBFeGRBN3J5Y1JvaWJuClJlTVA5K1BvVy9IaXJVQW4wUFdyRFUydEpLekxwNlhCcnozeE02VmFiWGxFcnNnZ0pybHN1cEwzQW9HQkFQM2gKWW5QLzVWWHBWeUtvMkhuZEEwWkwwK0pscFhNeFY4NDA4ZE1QMXE1WkVQbkZ2aVNXVjlLdFJVa3lCR2ZDUW1WeApEWkp3KzBRcmZUbXV5elZ6aUFZTFJJbHJKZ285QmN0NmRGUmpFaUo4NkVIeGdlV1J5UkhmaUZqalhqSXlCVGYyCmFxOGM2UlBTZmEyTEh1SVBlZEZVY2lrN0Z5WDg4dzJabkpBcjJFM3hBb0dBWnBOVWtuZkJlTjdRNHFvd2ZWdUwKQUJPQWIzbWdzU3hxc3RUUURxSERQSis3Tm90NkFZeUY4QUdYNVRwY1h4TU1kbWRCNk1qU0U2dEJjVHg5ZWQ3cwpKUXZCZUhuSkhSOHBrMit3ZGU2dklFeTZSOElWQmg5SWRvOVdXTHNERUp6cUhveHI2ZUJtMFdneFpZNG91MVFsClJiV2hSUnhJYzlGMnl0Um9TeHhITklzQ2dZQmRxSFQ2bUMrUmx3aG5KK1RjYUJWYUxJVVpJeWg3SzN2Wi9ad3MKb2M0ditYbVN1MGxmRS91SUpCWElYK1JTSnM3NXYxQWpjdnl1OUdBNUZHdXc1MU1KNzhRejhjeFJ3SnRQcW5nWgozWWFHSkpCR0s0TWhIcndQbE9nbTZwSUljSDJPWEtDVXcxU1UxSFU2dlhVQ0xuVmhMUWNFZ09FVVNaR2N0Y3VWClFDZUc4UUtCZ0UrMkFrZTR3QlRnZDhuZFhlTHRPcHBRZ21IUVViZUN1elZyRzFEVEJxam0rcVpnSzhKR2RUdXIKUDhybjY3TGNFSFpyRlJVODEwQXJUNU92QXRGOTlnU0dnKzd1Q2x5bzJtVGtxZWRIUTZ6RVZld0JUQlFQUEx1VAp6UGRYbjl5cTZSaVZPajU1QUROdmFuNXdQNUE3clRSTGZjNXZqQWRmV3hmYUZqYVIxNE85Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    # append the dashboard-admin token below, as one more field of this user entry
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
[root@k8s-master ~]# cp .kube/config /usr/local/k8s-dashboard.kubeconfig
[root@k8s-master ~]# cd /usr/local/
[root@k8s-master local]# ll
total 8
drwxr-xr-x. 2 root root    6 Apr 11  2018 bin
drwxr-xr-x. 2 root root    6 Apr 11  2018 etc
drwxr-xr-x. 2 root root    6 Apr 11  2018 games
drwxr-xr-x. 2 root root    6 Apr 11  2018 include
-rw-------  1 root root 6425 Aug  3 17:48 k8s-dashboard.kubeconfig
drwxr-xr-x. 2 root root    6 Apr 11  2018 lib
drwxr-xr-x. 2 root root    6 Apr 11  2018 lib64
drwxr-xr-x. 2 root root    6 Apr 11  2018 libexec
drwxr-xr-x. 2 root root    6 Apr 11  2018 sbin
drwxr-xr-x. 5 root root   49 Mar 30  2019 share
drwxr-xr-x. 2 root root    6 Apr 11  2018 src
[root@k8s-master local]# sz k8s-dashboard.kubeconfig
# When logging in, choose the Kubeconfig option and select this file
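Pasting the token into `.kube/config` by hand works, but the edit can also be scripted. A sketch (the helper name is illustrative; assumes GNU sed and a user entry that already contains a `client-key-data` line):

```shell
# add_token_to_kubeconfig FILE TOKEN
# Inserts "    token: <TOKEN>" directly after the client-key-data line,
# i.e. as one more field of the same user entry.
add_token_to_kubeconfig() {
  local file=$1 token=$2
  sed -i "s|^\( *client-key-data:.*\)$|\1\n    token: ${token}|" "$file"
}
```

Alternatively, `kubectl config set-credentials kubernetes-admin --token=<token>` updates the kubeconfig through kubectl itself and avoids hand editing entirely.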

Install and deploy the metrics-server add-on

Link: https://pan.baidu.com/s/1QRndSG88L5w-_DHfMxrd_g
Extraction code: 62dj
[root@k8s-master ~]# unzip metrics-server-master.zip
[root@k8s-master ~]# cd metrics-server-master/deploy/1.8+/
[root@k8s-master 1.8+]# ll
total 28
-rw-r--r-- 1 root root  397 Nov 12  2019 aggregated-metrics-reader.yaml
-rw-r--r-- 1 root root  303 Nov 12  2019 auth-delegator.yaml
-rw-r--r-- 1 root root  324 Nov 12  2019 auth-reader.yaml
-rw-r--r-- 1 root root  298 Nov 12  2019 metrics-apiservice.yaml
-rw-r--r-- 1 root root 1091 Nov 12  2019 metrics-server-deployment.yaml
-rw-r--r-- 1 root root  297 Nov 12  2019 metrics-server-service.yaml
-rw-r--r-- 1 root root  517 Nov 12  2019 resource-reader.yaml

Modify the deployment manifest

[root@k8s-master 1.8+]# vim metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6   # pull the image from a mirror registry
        args:   # add the following args
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
# Apply the manifests
[root@k8s-master 1.8+]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@k8s-master 1.8+]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   887m         22%    1701Mi          59%
k8s-node1    158m         7%     954Mi           35%
k8s-node2    137m         6%     894Mi           32%
# Output like the following means the deployment is not finished yet; wait 1-3 minutes
[root@k8s-master 1.8+]# kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
[root@k8s-master 1.8+]#
[root@k8s-master 1.8+]# kubectl top nodes
error: metrics not available yet
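Rather than re-running `kubectl top nodes` by hand until the errors above go away, the wait can be scripted with a generic retry loop (the helper below is illustrative, not part of the deployment; `kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"` is another way to probe the Metrics API directly):

```shell
# retry_until ATTEMPTS DELAY CMD...: run CMD until it succeeds,
# sleeping DELAY seconds between tries; fail after ATTEMPTS tries.
retry_until() {
  local attempts=$1 delay=$2 i
  shift 2
  for i in $(seq 1 "$attempts"); do
    "$@" >/dev/null 2>&1 && return 0
    sleep "$delay"
  done
  return 1
}

# e.g. wait up to 3 minutes for metrics-server to come up:
# retry_until 18 10 kubectl top nodes && kubectl top nodes
```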
