Overview

kubeadm has supported full cluster deployment for some time and went GA in version 1.13, including multi-master and multi-etcd clustered deployments. It is also the deployment method the project recommends most strongly: it is driven by its own SIG, and kubeadm genuinely makes good use of many Kubernetes features. Over the next few posts we will put it into practice and get a feel for what it can do.

Goals

1. Build a highly available Kubernetes cluster with kubeadm and create a new admin user

2. To set up the upgrade walkthrough later, install 1.13.1 here and upgrade to v1.14 in the next post

3. Explain how kubeadm works

This post focuses on deploying a highly available cluster with kubeadm.

Deploying a highly available k8s v1.13 cluster with kubeadm

There are two topologies:

  • Stacked etcd topology

  • Each etcd member runs on the same machine as a control-plane instance, so etcd and the control plane are coupled; the upside is simplicity, the downside is weaker fault tolerance, since losing one node takes out both an etcd member and a control-plane instance

  • Needs at least 4 machines (3 combined master-and-etcd, 1 node)

  • External etcd topology

  • etcd runs as a separate cluster outside the control-plane nodes, which gives better redundancy but needs at least 7 machines (3 masters, 3 etcd, 1 node)

  • Recommended for production

  • This post uses this topology; a sketch of how the choice shows up in the kubeadm config follows this list
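
For orientation, a minimal sketch of how the two topologies differ in a kubeadm ClusterConfiguration; the endpoint addresses below are placeholders, not values from this setup:

# Stacked etcd: kubeadm runs a local etcd member on every control-plane node (the default)
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  local: {}
---
# External etcd: the API server is pointed at an etcd cluster you built yourself
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://192.0.2.10:2379
    - https://192.0.2.11:2379
    - https://192.0.2.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key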

Steps

  • Prepare the environment

  • Install the components: docker, kubelet, kubeadm (all nodes)

  • Use these components to deploy a highly available etcd cluster

  • Deploy the masters

  • Join the worker node

  • Install the network add-on

  • Verify

  • Summary

Environment preparation

  • System environment
# OS version (not required; just what this example uses)
$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
# Kernel version (not required; just what this example uses)
$ uname -r
4.17.8-1.el7.elrepo.x86_64

# Enable ftype on the data disk (run on every node)
umount /data
mkfs.xfs -n ftype=1 -f /dev/vdb

# Disable swap
swapoff -a
sed -i "s#^/swapfile#\#/swapfile#g" /etc/fstab
mount -a

Installing docker, kubelet, and kubeadm (all nodes)

  • Install the container runtime (docker)

    • For k8s 1.13, following the official guidance we do not use the newest 18.09 yet; we install 18.06, which means pinning the version at install time

    • Source: "kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version."

    • Install script (run on every node):
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install -y --setopt=obsoletes=0 \
    docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker
  • Install kubeadm, kubelet, kubectl

    • The official Google yum repo cannot be reached directly from servers inside mainland China, so download the packages from a machine that can reach it and then upload them to the servers
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
$ yum -y install --downloadonly --downloaddir=k8s --disableexcludes=kubernetes kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
$ ls k8s/
25cd948f63fea40e81e43fbe2e5b635227cc5bbda6d5e15d42ab52decf09a5ac-kubelet-1.13.1-0.x86_64.rpm
53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm
5af5ecd0bc46fca6c51cc23280f0c0b1522719c282e23a2b1c39b8e720195763-kubeadm-1.13.1-0.x86_64.rpm
7855313ff2b42ebcf499bc195f51d56b8372abee1a19bbf15bb4165941c0229d-kubectl-1.13.1-0.x86_64.rpm
fe33057ffe95bfae65e2f269e1b05e99308853176e24a4d027bc082b471a07c0-kubernetes-cni-0.6.0-0.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
  • Local installation
# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Install from the local RPMs
yum localinstall -y k8s/*.rpm
systemctl enable --now kubelet
  • Networking fix: CentOS 7 is known to route traffic incorrectly because iptables is bypassed, so make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Deploying a highly available etcd cluster with these components

1. On the etcd nodes, configure kubelet to start and manage the etcd service

cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet

2. Generate a kubeadm config file for each etcd host so that every host runs exactly one etcd instance. Run the script below on etcd1 (HOST0); when it finishes there is one directory per host under /tmp.

# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
export HOST0=10.10.184.226
export HOST1=10.10.213.222
export HOST2=10.10.239.108

# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1" "infra2")

for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done

3. Create the CA: run the following command on host0 to generate the certificates; it creates two files, /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key (in this environment the step required the external access described in the summary)

[root@10-10-184-226 ~]# kubeadm init phase certs etcd-ca
[certs] Generating "etcd/ca" certificate and key

4. On host0, generate the certificates for each etcd node:

export HOST0=10.10.184.226
export HOST1=10.10.213.222
export HOST2=10.10.239.108

kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml

# No need to move the certs because they are for HOST0

# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
  • Distribute the certificates and kubeadmcfg.yaml to each etcd node; after the copy every node should look like this (one way to do the copy is sketched after the tree):
/root/
└── kubeadmcfg.yaml
---
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── ca.key
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key
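
A minimal sketch of one way to push the files out from host0, assuming root SSH access to the other two etcd hosts and the HOST1/HOST2 variables from above:

# copy each host's kubeadm config and certificates to the matching etcd node (run on host0)
for H in ${HOST1} ${HOST2}; do
    scp /tmp/${H}/kubeadmcfg.yaml root@${H}:/root/
    scp -r /tmp/${H}/pki root@${H}:/etc/kubernetes/
done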

5. Generate the static pod manifests; run this on each of the 3 etcd nodes (pulling the etcd image from k8s.gcr.io requires the external access described in the summary):

$ kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

6. Check the etcd cluster health; once this succeeds, the etcd cluster is complete

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes \
    k8s.gcr.io/etcd:3.2.24 etcdctl \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints https://${HOST0}:2379 cluster-health
member 9969ee7ea515cbd2 is healthy: got healthy result from https://10.10.213.222:2379
member cad4b939d8dfb250 is healthy: got healthy result from https://10.10.239.108:2379
member e6e86b3b5b495dfb is healthy: got healthy result from https://10.10.184.226:2379
cluster is healthy

Deploying the masters with kubeadm

  • Copy the certificates from any one of the etcd nodes to the master1 node
export CONTROL_PLANE="ubuntu@10.0.0.7"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
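
The three files land in the login user's home directory on master1, while the kubeadm-config.yaml below expects them under /etc/kubernetes/pki, so move them into place first; a minimal sketch, assuming they were copied into root's home directory:

# on master1: put the etcd certs where the kubeadm config expects them
mkdir -p /etc/kubernetes/pki/etcd
mv ~/ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ~/apiserver-etcd-client.crt /etc/kubernetes/pki/
mv ~/apiserver-etcd-client.key /etc/kubernetes/pki/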
  • On the first master, write the config file kubeadm-config.yaml and initialize
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
# kubernetesVersion: 1.13.1
# note: we want to install 1.13.1 first and upgrade later, so you can pin the version here instead of "stable"
apiServer:
  certSANs:
  - "k8s.paas.test"
controlPlaneEndpoint: "k8s.paas.test:6443"
etcd:
  external:
    endpoints:
    - https://10.10.184.226:2379
    - https://10.10.213.222:2379
    - https://10.10.239.108:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: "10.244.0.0/16"

Note: k8s.paas.test:6443 here is a load balancer in front of the API servers; if you do not have one, a virtual IP can be used instead (a lab-only fallback is sketched below).
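
If neither an LB nor a VIP is available (for example in a lab), one minimal fallback is to resolve the name to master1 on every node via /etc/hosts; 10.10.119.128 is k8s-m1's address as seen in the pod listing later, so treat it as an assumption about this environment. This gives no failover for the endpoint itself, so it is not a real HA setup:

# lab-only: point the control-plane endpoint at master1 on every node
echo "10.10.119.128 k8s.paas.test" >> /etc/hosts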

  • Using a private registry (custom images): kubeadm lets you customize cluster initialization through parameters in the config file; for example, imageRepository sets the image prefix, so after pushing the images to an internal registry you can edit that field in kubeadm-config.yaml and then run init (sketched below)
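
A minimal sketch of that field, with a placeholder registry address:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
# pull control-plane images from an internal mirror instead of k8s.gcr.io
# (registry.example.local/k8s is a placeholder)
imageRepository: registry.example.local/k8s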

  • On master1, run: kubeadm init --config kubeadm-config.yaml
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join k8s.paas.test:6443 --token f1oygc.3zlc31yjcut46prf --discovery-token-ca-cert-hash xx

[root@k8s-m1 ~]$ kubectl get node
NAME     STATUS     ROLES    AGE     VERSION
k8s-m1   NotReady   master   4m54s   v1.13.1

Installing the other two masters

  • Copy the admin.conf kubeconfig and the pki certificates listed below from master1 to the same paths on the other two masters:
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key (the official docs say to copy this, but in practice it is not needed)
/etc/kubernetes/admin.conf

Note: the official docs omit two files, /etc/kubernetes/pki/apiserver-etcd-client.crt and /etc/kubernetes/pki/apiserver-etcd-client.key; without them the apiserver fails to start with the error below (one way to copy everything is sketched after the error output):

Unable to create storage backend: config (&{ /registry [] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true 0xc000133c20 <nil> 5m0s 1m0s}),
err (open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory)
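
A minimal sketch of the copy, assuming root SSH access; k8s-m2 and k8s-m3 are the node names used later in this post:

# run on master1: distribute the shared certs, the etcd client cert/key and admin.conf
for M in k8s-m2 k8s-m3; do
    ssh root@${M} "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} root@${M}:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/{apiserver-etcd-client.crt,apiserver-etcd-client.key} root@${M}:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt root@${M}:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf root@${M}:/etc/kubernetes/
done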
  • On masters 2 and 3, run the join:
kubeadm join k8s.paas.test:6443 --token f1oygc.3zlc31yjcut46prf --discovery-token-ca-cert-hash sha256:078b63e29378fb6dcbedd80dd830b83e37521f294b4e3416cd77e854041d912f \
    --experimental-control-plane

Joining the worker node

[root@k8s-n1 ~]
$ kubeadm join k8s.paas.test:6443 --token f1oygc.3zlc31yjcut46prf --discovery-token-ca-cert-hash sha256:078b63e29378fb6dcbedd80dd830b83e37521f294b4e3416cd77e854041d912f
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "k8s.paas.test:6443"
[discovery] Created cluster-info discovery client, requesting info from
"https://k8s.paas.test:6443"
[discovery] Requesting info from "https://k8s.paas.test:6443" again to validate TLS
against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate
validates against pinned roots, will use API Server "k8s.paas.test:6443"
[discovery] Successfully established connection with API Server "k8s.paas.test:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm
kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13"
ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file
"/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the
Node API object "k8s-n1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Installing the network add-on

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-dc4t2         1/1     Running   0          14m     172.17.0.3      k8s-m1   <none>           <none>
kube-system   coredns-86c58d9df4-jxv6v         1/1     Running   0          14m     172.17.0.2      k8s-m1   <none>           <none>
kube-system   kube-apiserver-k8s-m1            1/1     Running   0          13m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-apiserver-k8s-m2            1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-apiserver-k8s-m3            1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-controller-manager-k8s-m1   1/1     Running   0          13m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-controller-manager-k8s-m2   1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-controller-manager-k8s-m3   1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-flannel-ds-amd64-nvmtk      1/1     Running   0          44s     10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-flannel-ds-amd64-pct2g      1/1     Running   0          44s     10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-flannel-ds-amd64-ptv9z      1/1     Running   0          44s     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-flannel-ds-amd64-zcv49      1/1     Running   0          44s     10.10.175.146   k8s-n1   <none>           <none>
kube-system   kube-proxy-9cmg2                 1/1     Running   0          2m34s   10.10.175.146   k8s-n1   <none>           <none>
kube-system   kube-proxy-krlkf                 1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-proxy-p9v66                 1/1     Running   0          14m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-proxy-wcgg6                 1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-scheduler-k8s-m1            1/1     Running   0          13m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-scheduler-k8s-m2            1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-scheduler-k8s-m3            1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>
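
Once the flannel pods are Running, the nodes should move from NotReady to Ready; a quick check (output omitted here):

kubectl get nodes -o wide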

Installation complete.

Verification

  • First verify that kube-apiserver, kube-controller-manager, kube-scheduler and the pod network are working:
$ kubectl create deployment nginx --image=nginx:alpine
$ kubectl get pods -l app=nginx -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-54458cd494-r6hqm   1/1     Running   0          5m24s   10.244.4.2   k8s-n1   <none>           <none>
  • Verify kube-proxy
$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-m1 ~]
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        122m
nginx        NodePort    10.108.192.221   <none>        80:30992/TCP   4s

[root@k8s-m1 ~]
$ kubectl get pods -l app=nginx -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-54458cd494-r6hqm   1/1     Running   0          6m53s   10.244.4.2   k8s-n1   <none>           <none>

$ curl -I k8s-n1:30992
HTTP/1.1 200 OK
  • Verify DNS and pod networking
$ kubectl run --generator=run-pod/v1 -it curl --image=radial/busyboxplus:curl
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.108.192.221 nginx.default.svc.cluster.local
  • High availability

Power off master1 and keep requesting nginx from a pod:

while true;do curl -I nginx && sleep 1 ;done
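
One way to run this loop from inside the cluster, assuming the busyboxplus curl pod from the DNS check above is still around:

# run the probe inside the existing curl pod; the service should keep answering while master1 is down
kubectl exec -it curl -- sh -c 'while true; do curl -sI nginx | head -n 1; sleep 1; done'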

Summary

  • On versions

    • Kernel 4.19 is more stable; the 4.17 used here is not recommended (anything newer is already 5.x)

    • For docker, 17.12 is the stable release recommended for production; 18.06 is used here, and although upstream k8s now also confirms compatibility with 18.09, 17.12 remains the safer choice in production
  • On networking: choices differ from shop to shop, and flannel is common in small and mid-sized companies; pick the network plugin before deploying and set it in the config file up front (the kubeadm config at the start of the official guide does not include it, yet the network setup step later requires it)

  • Recovering from mistakes

    • kubeadm reset is a handy tool when you want to rebuild the environment, but it is not a full reset: some data in etcd (such as configmaps and secrets) is not cleared, so if you really need to reset the whole environment, remember to reset etcd as well after running reset.

    • To reset etcd, empty /var/lib/etcd on each etcd node and restart the docker service (sketched below)
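
A minimal sketch of that procedure for the external-etcd layout used here (this destroys all cluster state, so use it only when you really mean to start over):

# on every k8s node: undo what kubeadm set up
kubeadm reset -f
# on every etcd node: wipe the data dir and let the static etcd pod start fresh
rm -rf /var/lib/etcd/*
systemctl restart docker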
  • Working around blocked access to Google services

    • Images: kubeadm supports a custom image prefix; set imageRepository in kubeadm-config.yaml (as shown earlier)

    • yum: packages can be downloaded in advance and imported, or fetched through an http_proxy

    • init: signing certificates and running init need to reach Google; setting http_proxy works here as well (sketched below).
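
A minimal sketch of the proxy approach; the proxy address is a placeholder, and the no_proxy list matters so that traffic to the nodes themselves and to the control-plane endpoint does not go through the proxy:

# placeholder proxy address; adjust to your environment
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
# keep local and cluster traffic off the proxy
export no_proxy=127.0.0.1,localhost,10.0.0.0/8,10.96.0.0/12,k8s.paas.test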
  • More

    • Certificates and upgrades will be covered in the next post.

References:

  • https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/

Reposted from: https://blog.51cto.com/51reboot/2379722
