Upgrading a Kubernetes Cluster with kubeadm
Upgrade strategies:
- Always stay on the latest release
- Upgrade every six months, which keeps you one or two minor versions behind the community
- Upgrade once a year or less often, which leaves you too many versions behind
- Before upgrading, back up all components and data, especially etcd.
- Never skip minor versions, e.g. jumping straight from 1.16 to 1.19. With releases 1.19, 1.20, 1.21 and 1.22 available, do not go from 1.19 directly to 1.22: upstream only guarantees API compatibility across two adjacent minor versions, so an API or flag that worked in 1.19 may already be deprecated or removed by 1.22, and a feature you relied on in 1.19 may no longer work. Upgrade one minor version at a time instead: 1.19 --> 1.20, confirm everything works, then 1.20 --> 1.21, and so on.
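The "one minor version at a time" rule above can be encoded as a small guard script. This is a sketch, not an official kubeadm feature; it assumes plain "major.minor.patch" version strings:

```shell
#!/usr/bin/env bash
# Sketch: refuse an upgrade plan that skips a minor version.
# Assumes version strings look like "major.minor.patch", e.g. 1.19.0.
check_skew() {
  local cur_minor tgt_minor
  cur_minor=$(echo "$1" | cut -d. -f2)
  tgt_minor=$(echo "$2" | cut -d. -f2)
  if [ $((tgt_minor - cur_minor)) -gt 1 ]; then
    echo "refuse: $1 -> $2 skips a minor version"
    return 1
  fi
  echo "ok: $1 -> $2"
}

check_skew "1.19.0" "1.20.5"   # one minor apart: allowed
```

Run such a check before every `kubeadm upgrade apply` to catch accidental multi-version jumps.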
Upgrading the master node
Check the current versions:
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 274d v1.19.0
k8s-node1 Ready <none> 274d v1.19.0
k8s-node2 Ready <none> 274d v1.19.0
1. Find the latest version number by listing the available kubeadm packages
[root@k8s-master ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Determining fastest mirrors
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirrors.bfsu.edu.cn
Installed Packages
kubeadm.x86_64 1.19.0-0 @kubernetes
Available Packages
kubeadm.x86_64 1.18.10-0 kubernetes
kubeadm.x86_64 1.18.12-0 kubernetes
kubeadm.x86_64 1.19.0-0 kubernetes
kubeadm.x86_64 1.19.1-0 kubernetes
kubeadm.x86_64 1.19.2-0 kubernetes
kubeadm.x86_64 1.19.3-0 kubernetes
kubeadm.x86_64 1.19.4-0 kubernetes
2. Upgrade the kubeadm package
[root@k8s-master ~]# yum install -y kubeadm-1.19.4-0 --disableexcludes=kubernetes
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:15:05Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
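Before draining the master, take the etcd backup the strategy section calls for. A minimal sketch that only builds and prints the `etcdctl` command rather than running it; the endpoint and certificate paths are kubeadm's defaults, so verify them on your cluster before executing the printed command:

```shell
#!/usr/bin/env bash
# Sketch: print (do not run) an etcd snapshot command for a kubeadm cluster.
# The certificate paths are kubeadm defaults; adjust if yours differ.
backup_cmd() {
  echo "ETCDCTL_API=3 etcdctl snapshot save $1" \
       "--endpoints=https://127.0.0.1:2379" \
       "--cacert=/etc/kubernetes/pki/etcd/ca.crt" \
       "--cert=/etc/kubernetes/pki/etcd/server.crt" \
       "--key=/etc/kubernetes/pki/etcd/server.key"
}

backup_cmd "/backup/etcd-$(date +%F).db"
```

Printing the command first makes it easy to review the paths before touching etcd.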
3. Drain the master node (evict its workloads and mark it unschedulable)
[root@k8s-master ~]# kubectl drain k8s-master --ignore-daemonsets
node/k8s-master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-6hgrq, kube-system/kube-proxy-xth6p
node/k8s-master drained
# The node is now unschedulable; no new pods will be scheduled onto it.
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready,SchedulingDisabled master 274d v1.19.0
k8s-node1 Ready <none> 274d v1.19.0
k8s-node2 Ready <none> 274d v1.19.0
4. Check the upgrade plan
[root@k8s-master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.0
[upgrade/versions] kubeadm version: v1.19.4
I0816 20:44:13.219092 56342 version.go:252] remote version is much newer: v1.22.0; falling back to: stable-1.19
[upgrade/versions] Latest stable version: v1.19.14
[upgrade/versions] Latest version in the v1.19 series: v1.19.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
kubelet 3 x v1.19.0 v1.19.14

Upgrade to the latest version in the v1.19 series:

COMPONENT CURRENT AVAILABLE
kube-apiserver v1.19.0 v1.19.14
kube-controller-manager v1.19.0 v1.19.14
kube-scheduler v1.19.0 v1.19.14
kube-proxy v1.19.0 v1.19.14
CoreDNS 1.7.0 1.7.0
etcd 3.4.9-1 3.4.13-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.19.14

Note: Before you can perform this upgrade, you have to update kubeadm to v1.19.14.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
The output above prints the upgrade command:
You can now apply the upgrade by executing the following command: kubeadm upgrade apply v1.19.14
The plan suggests v1.19.14 (the latest 1.19 patch release), but kubeadm cannot apply a version newer than the kubeadm binary itself, so with kubeadm 1.19.4 installed we apply v1.19.4 instead.
5. Run the upgrade
[root@k8s-master ~]# kubeadm upgrade apply v1.19.4
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.4". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
6. Restore scheduling on the master node
[root@k8s-master ~]# kubectl uncordon k8s-master
node/k8s-master uncordoned
7. Upgrade kubelet and kubectl
[root@k8s-master ~]# yum install -y kubelet-1.19.4-0 kubectl-1.19.4-0 --disableexcludes=kubernetes
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart kubelet
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 274d v1.19.4
k8s-node1 Ready <none> 274d v1.19.0
k8s-node2 Ready <none> 274d v1.19.0
You can see that the control-plane pods were recreated from the new images:
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-5c6f6b67db-5wc2g 1/1 Running 0 8m52s 10.244.169.143 k8s-node2 <none> <none>
calico-node-6hgrq 1/1 Running 2 274d 192.168.179.102 k8s-master <none> <none>
calico-node-jxh4t 1/1 Running 2 274d 192.168.179.103 k8s-node1 <none> <none>
calico-node-xjklb 1/1 Running 3 274d 192.168.179.104 k8s-node2 <none> <none>
coredns-6d56c8448f-6x65s 1/1 Running 0 8m52s 10.244.169.145 k8s-node2 <none> <none>
coredns-6d56c8448f-xjt6g 1/1 Running 0 8m53s 10.244.235.193 k8s-master <none> <none>
etcd-k8s-master 1/1 Running 0 27m 192.168.179.102 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 25m 192.168.179.102 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 1 25m 192.168.179.102 k8s-master <none> <none>
kube-proxy-p77r2 1/1 Running 0 24m 192.168.179.104 k8s-node2 <none> <none>
kube-proxy-pghrf 1/1 Running 0 23m 192.168.179.102 k8s-master <none> <none>
kube-proxy-vvpn4 1/1 Running 0 24m 192.168.179.103 k8s-node1 <none> <none>
kube-scheduler-k8s-master 1/1 Running 1 25m 192.168.179.102 k8s-master <none> <none>
kuboard-74c645f5df-l7rmf 1/1 Running 1 271d 10.244.169.138 k8s-node2 <none> <none>
metrics-server-7dbf6c4558-lm972 1/1 Running 0 8m53s 192.168.179.104 k8s-node2 <none> <none>
Upgrading the worker nodes
1. Upgrade the kubeadm package on the worker node
[root@k8s-node1 ~]# yum install -y kubeadm-1.19.4-0 --disableexcludes=kubernetes
2. Drain the worker node (run from the master)
[root@k8s-master ~]# kubectl drain k8s-node1 --ignore-daemonsets --delete-local-data
node/k8s-node1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-jxh4t, kube-system/kube-proxy-vvpn4
evicting pod kubernetes-dashboard/dashboard-metrics-scraper-7b59f7d4df-5tzgb
evicting pod kube-system/calico-kube-controllers-5c6f6b67db-q5qb6
evicting pod kube-system/coredns-6d56c8448f-ddt97
evicting pod kube-system/coredns-6d56c8448f-lwn8m
evicting pod kube-system/metrics-server-7dbf6c4558-sw8w8
pod/metrics-server-7dbf6c4558-sw8w8 evicted
pod/coredns-6d56c8448f-ddt97 evicted
pod/dashboard-metrics-scraper-7b59f7d4df-5tzgb evicted
pod/calico-kube-controllers-5c6f6b67db-q5qb6 evicted
pod/coredns-6d56c8448f-lwn8m evicted
node/k8s-node1 evicted
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 274d v1.19.4
k8s-node1 Ready,SchedulingDisabled <none> 274d v1.19.0
k8s-node2 Ready <none> 274d v1.19.0
3. Run kubeadm upgrade node. This command is run on the node being upgraded; on a worker it only refreshes the local kubelet configuration, while on a control-plane node it also upgrades the static-pod manifests (the output below was captured on the master, which is why it shows the control-plane checks).
[root@k8s-master ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.19.4"...
Static pod: kube-apiserver-k8s-master hash: 45ace2daaf2d9063c22f1e458122b22e
Static pod: kube-controller-manager-k8s-master hash: 3f1f0783a4c18b360e0847ad1bc080ce
Static pod: kube-scheduler-k8s-master hash: 6bfd1888d95f430b2d7d2b7faa87eade
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version "3.4.13-0" is not newer than the currently installed "3.4.13-0". Skipping etcd upgrade
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests966354998"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
4. Upgrade kubelet and kubectl
[root@k8s-node1 ~]# yum install -y kubelet-1.19.4-0 kubectl-1.19.4-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn
 * extras: mirrors.nju.edu.cn
 * updates: mirrors.nju.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.19.0-0 will be updated
---> Package kubectl.x86_64 0:1.19.4-0 will be an update
---> Package kubelet.x86_64 0:1.19.0-0 will be updated
---> Package kubelet.x86_64 0:1.19.4-0 will be an update
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart kubelet
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 274d v1.19.4
k8s-node1 Ready <none> 274d v1.19.4
k8s-node2 Ready <none> 274d v1.19.0
On the master node the control-plane component images (etcd, kube-controller-manager, and so on) are updated, while on the worker nodes only the kubelet and kubectl packages are updated.
In practice, test the upgrade in a test environment first, then on a small subset of production nodes; only roll it out in batches once those nodes are confirmed healthy.
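The batch rollout described above can be scripted. A minimal sketch that only prints the per-node command sequence for review instead of executing it; the node names and target version are the examples from this article, and it assumes passwordless ssh to each node:

```shell
#!/usr/bin/env bash
# Sketch: print (do not run) the drain -> upgrade -> uncordon sequence
# for each worker node, so the rollout can be reviewed first.
TARGET="1.19.4-0"   # example target version from this upgrade

plan_node() {
  local node=$1
  echo "kubectl drain $node --ignore-daemonsets --delete-local-data"
  echo "ssh $node yum install -y kubeadm-$TARGET kubelet-$TARGET kubectl-$TARGET --disableexcludes=kubernetes"
  echo "ssh $node kubeadm upgrade node"
  echo "ssh $node systemctl daemon-reload"
  echo "ssh $node systemctl restart kubelet"
  echo "kubectl uncordon $node"
}

for n in k8s-node1 k8s-node2; do
  plan_node "$n"
done
```

Reviewing the printed plan per node keeps the upgrade one node at a time, matching the canary-then-batch advice above.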
Decommissioning a cluster node correctly
If you want to take a node down for maintenance or remove it from the cluster, the correct flow is:
- kubectl drain <node> to evict its workloads and mark it unschedulable
- kubectl delete node <node> to remove it from the cluster
- kubeadm reset on the node itself to clean up its local state
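This flow can be sketched as a script that prints the commands for one node; the node name is an example, and nothing is executed until you review and run the output yourself:

```shell
#!/usr/bin/env bash
# Sketch: print (do not run) the decommission sequence for a single node.
decommission_plan() {
  local node=$1
  echo "kubectl drain $node --ignore-daemonsets --delete-local-data"
  echo "kubectl delete node $node"
  echo "ssh $node kubeadm reset -f"
}

decommission_plan "k8s-node2"
```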