Preface

Prepare at least three CentOS servers: one master node and two worker nodes. The CentOS version must be 7.5 or later; I'm using 7.9 here. Beyond that, a few extra requirements apply:

  • At least 2 CPU cores and 2 GB of RAM (a single core won't work; I've tried)

I. Preparing the k8s environment

The servers that run k8s must meet the following requirements:

  1. A Linux distribution based on Debian or Red Hat, or one of the distributions that ship without a package manager; these are the systems the generic instructions cover
  2. At least 2 GB of RAM and 2 CPU cores per machine
  3. The firewall should preferably be turned off
  4. No duplicate hostnames, MAC addresses or product_uuids among the nodes

Next, configure and install the following items on all of the nodes.

1. Give every system a unique static IP

Open the NIC configuration file /etc/sysconfig/network-scripts/ifcfg-ens33 with the vi editor:

# Change BOOTPROTO to static
BOOTPROTO=static
# Add the IP, gateway and DNS addresses; the gateway can be found with "netstat -rn"
IPADDR=192.168.253.131
GATEWAY=192.168.253.2
DNS1=8.8.8.8
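
The new address only takes effect once the network service restarts. A quick way to apply and check it, assuming your interface is ens33 as above:

# Restart networking so the static IP takes effect
systemctl restart network
# Confirm the interface now carries the configured address
ip addr show ens33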

2. Time synchronization

k8s requires the clocks of the nodes in a cluster to agree exactly, so we simply sync time from the network with chronyd:

# Start the chronyd service
systemctl start chronyd
# Enable it at boot
systemctl enable chronyd
# Check the current time
date
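
To confirm chronyd is actually tracking an upstream server, you can list its sources; the exact output depends on which NTP servers your system is configured with:

# The line marked ^* is the source currently being synced from
chronyc sources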

3. Reset the hostnames

Hostnames must not repeat within a k8s cluster, so give each node a different one:

# Master node
hostnamectl set-hostname master
# Worker node 1
hostnamectl set-hostname node1
# Worker node 2
hostnamectl set-hostname node2

4. Set up hosts name mappings

Run the following on all nodes to add a hosts entry pointing at the master node. Don't copy it verbatim; change the IP to your own:

# Run on all nodes
echo "192.168.253.131 cluster-endpoint" >> /etc/hosts

The master hostname is needed during initialization, so also add the following entries to hosts; the names must match the hostnames set in step 3:

# The following is done on the master node only
echo "192.168.253.131 master" >> /etc/hosts
echo "192.168.253.132 node1" >> /etc/hosts
echo "192.168.253.133 node2" >> /etc/hosts
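
A quick sanity check that the mappings work; run it on the master, where all of the entries above exist:

# Both names should answer from the IPs added above
ping -c 1 cluster-endpoint
ping -c 1 node1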

5. Disable SELinux

SELinux (Security-Enhanced Linux) is a mechanism, or security module, that enforces access-control policies; in short, it confines users to the policies and rules set by the system administrator. Here we put SELinux into permissive mode (effectively disabling it).

# Disable temporarily (reverts after reboot); "setenforce Permissive" does the same thing
setenforce 0
# Disable permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
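
You can confirm the current mode at any time:

# Prints Permissive after setenforce 0, and Disabled after a reboot
getenforce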

6. Turn off the swap partition

A swap partition is virtual memory: when physical memory runs low, the system pushes part of it out to the swap area on disk so that running programs can keep going. Turning swap off improves k8s performance, and by default kubelet refuses to start while swap is enabled.

# Turn swap off temporarily (reverts after reboot)
swapoff -a
# Disable swap permanently (comments out the swap line in /etc/fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
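
Verify that swap is really gone:

# The Swap row should show 0 across the board
free -m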

7. Let iptables see bridged traffic

Pod traffic on every node crosses a Linux bridge, and without the br_netfilter module that bridged traffic bypasses iptables. Loading the module and enabling the sysctls below makes bridged traffic visible to iptables, which kube-proxy and most network plugins rely on.
Copy all of the following into the command line and run it:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the configuration above:

sudo sysctl --system
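
To confirm the module is loaded and both flags are set:

# br_netfilter should be listed, and both sysctls should print 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables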

8. Disable the iptables and firewalld services

k8s and docker generate a large number of iptables rules while running; to keep the system's own rules from getting tangled up with them, simply turn the system ones off:

# Stop iptables; ignore this if the service doesn't exist
systemctl stop iptables
systemctl disable iptables
# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
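
A quick status check afterwards, assuming firewalld was installed to begin with:

# Should print "not running"
firewall-cmd --state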

9. Enable ipvs

In Kubernetes, a Service has two proxy modes: one based on iptables and one based on ipvs. Comparing the two, ipvs performs noticeably better, but to use it the ipvs kernel modules must be loaded manually.

# 1. Install ipset and ipvsadm
yum install ipset ipvsadm -y
# 2. Write the modules to be loaded into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# 3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules
# 5. Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

II. Installing docker

First, install the docker runtime on every server.

1. Remove any existing docker

# List installed docker packages
yum list installed | grep docker
# Remove docker components
yum -y remove docker*
# Besides docker, containerd.io (the container runtime component) must also be removed
yum -y remove containerd.io.x86_64
# Delete the docker data directory
rm -rf /var/lib/docker

2. Use the Aliyun mirror repository

# Use the domestic Aliyun mirror repository; it's faster and recommended
# (yum-config-manager is provided by the yum-utils package)
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install a specific docker version

yum install -y \
    docker-ce-20.10.7 \
    docker-ce-cli-20.10.7 \
    containerd.io-1.4.6

4. Use systemd instead of cgroupfs, and configure a registry mirror

By default docker's cgroup driver is cgroupfs, while k8s recommends systemd instead. Put the following into /etc/docker/daemon.json; if the file doesn't exist, create it by hand:

{
  "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
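
If docker is already running when you edit this file, restart the daemon so the change is picked up; either way, once docker is up you can confirm the driver switched:

# Restart docker so daemon.json takes effect (skip if docker isn't running yet)
systemctl daemon-reload
systemctl restart docker
# Should report: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"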

5. Start docker

# Option 1: start docker
systemctl start docker
# Option 2: start docker now and enable it at boot
systemctl enable docker --now

III. Installing the three k8s essentials

The following must be done on all nodes.

1. Install kubelet, kubeadm and kubectl

Downloading from the official k8s repository is painfully slow from here, so use the Aliyun mirror instead:

# Configure the yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Notes

  • enabled=1 : enables the repo
  • gpgcheck=0 : whether to verify package GPG signatures; 1 on, 0 off
  • repo_gpgcheck=0 : whether to verify the signature and integrity of the repo metadata; 1 on, 0 off

2. Install the three packages

The --disableexcludes=kubernetes flag lifts any package excludes configured for the kubernetes repo, so the packages install from it:

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
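
Once that finishes, it's worth confirming all three tools are at the pinned version:

# All three should report v1.20.9
kubeadm version
kubelet --version
kubectl version --client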

3. Add configuration

Add the following to the /etc/sysconfig/kubelet file:

KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

Notes

  • k8s likewise uses systemd in place of cgroupfs
  • k8s's default proxy mode is iptables; this switches it to ipvs, which reportedly performs better. You can also leave it unset.

4. Start kubelet

# Enable kubelet at boot and start it now
systemctl enable --now kubelet
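
Don't worry if kubelet now shows as failed or keeps restarting every few seconds; it has nothing to do until kubeadm init (or join) runs, and this is expected. You can check its state with:

# kubelet crash-loops until the node is initialized or joined; normal for now
systemctl status kubelet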

IV. Bootstrapping the cluster with kubeadm

In theory, the following only needs to be done on the master node.

1. Download the images each machine needs

Before initializing the master we need to pull a set of component images. They live on registries that are slow or unreachable from China, so we swap the official registry for an Aliyun mirror. The command below writes a script named images.sh into the current directory; running it pulls the required components into docker. In theory only the master needs them, but to be safe we run it on every node.

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

Make the script executable and run it:

chmod +x ./images.sh && ./images.sh
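
When the script finishes, you can confirm the seven images landed locally:

# Should list the images pulled by images.sh
docker images | grep lfy_k8s_images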

V. Master node setup

Note: the configuration and installation below belong to the master only, so run them on the master node; do not run them on the worker nodes.

1. Initialize the master node

Use kubeadm init to quickly bootstrap a master node:

kubeadm init \
--apiserver-advertise-address=192.168.253.131 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

Notes

  • --apiserver-advertise-address : the address the API server advertises; this must be the master node's IP
  • --control-plane-endpoint : the control-plane endpoint; use the master domain name added to hosts earlier
  • --image-repository : the image registry; a domestic mirror is much faster
  • --kubernetes-version : the version number; keep it in step with your installed k8s version
  • --service-cidr : the Service network range
  • --pod-network-cidr : the Pod network range
  • --ignore-preflight-errors=all : ignore preflight check failures. Kubernetes requires at least 2 CPU cores and 2 GB of RAM; with only a single core or 1 GB the install refuses to continue, and "all" skips every check so even a single-core machine can install k8s. My cloud server is a single-core 2 GB machine, so I had to add this flag (it is not in the command above; append it if you need it)

The initialization takes a few minutes. Output like the following means it succeeded, but you are not done yet: to actually use the k8s cluster, keep going with the steps below.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
    --discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
    --discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72

2. Post-init steps

Continue with the configuration the output above prompts for; the following commands are required before you can use the cluster.
Remember: these are copied from my init output in step 1; you should copy and run the ones from yours.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Join the worker nodes to the cluster (the join token; workers only)

To add worker nodes to the cluster, run the following on each worker node; this is the cluster's join token.
Again, this command is copied from my init output in step 1; use the one from your own output.

Run the following on the worker nodes only.

kubeadm join cluster-endpoint:6443 --token ppwpeo.286k19gvjdlelen8 \
    --discovery-token-ca-cert-hash sha256:1e402bf817b1f8f2ade7aeb0a702c389903a96e72724517409793e7b4904ee72

After joining, run kubectl get nodes on the master. Besides the master there are now two worker nodes, and every node shows NotReady because no network plugin is installed yet, which happens to be our next step:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE    VERSION
master   NotReady   control-plane,master   4h5m   v1.20.9
node1    NotReady   <none>                 98m    v1.20.9
node2    NotReady   <none>                 7s     v1.20.9

Note that the token is only valid for 24 hours; after that a new one must be generated. Run the following on the master to create a fresh join command:

# This must be run on the master to generate a token
kubeadm token create --print-join-command
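
To see which tokens currently exist and when each one expires, list them on the master:

# Shows the bootstrap tokens along with their TTLs
kubeadm token list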

4. Install the Pod network plugin (master only)

Run the following on the master node only.

There are several network plugins to choose from; if you can reach it, pick one from the official add-ons page and install from there: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Here we'll install the calico plugin.

# Download calico.yaml into the current directory
wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
# The site is hosted abroad; if you can't reach it, see section VIII at the end for how to get the file

# Apply the network plugin
kubectl apply -f calico.yaml

When it succeeds, output like the following appears:

[root@master ~]# kubectl apply -f calico.yaml
secret/calico-etcd-secrets configured
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers unchanged
poddisruptionbudget.policy/calico-kube-controllers created
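
The nodes only flip to Ready once the calico pods are up; you can watch them start (pod names will differ on your cluster):

# Watch the calico and coredns pods in kube-system until they are all Running
kubectl get pods -n kube-system -w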

A few minutes later, every node is Ready:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   5h5m   v1.20.9
node1    Ready    <none>                 158m   v1.20.9
node2    Ready    <none>                 59m    v1.20.9

VI. Self-recovery demo

The most impressive part of a k8s cluster is that it recovers by itself: if the machines go down for some reason, everything comes back up automatically after a restart, keeping the cluster highly available. Let's test it by rebooting all three servers:

reboot

A few minutes later, the cluster has recovered and every node is Ready again; that is the self-recovery at work:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   15h   v1.20.9
node1    Ready    <none>                 13h   v1.20.9
node2    Ready    <none>                 11h   v1.20.9

VII. Deploying an nginx

Just run the following (on the master node):

# Deploy nginx: create a Deployment controller named nginx
kubectl create deployment nginx --image=nginx:1.14-alpine
# Expose the port
kubectl expose deployment nginx --port=80 --type=NodePort

Right afterwards the pod is still being created; ContainerCreating means exactly that:

[root@master ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
nginx-65c4bffcb6-q64cf   0/1     ContainerCreating   0          2m47s

Wait a few more minutes and it's done; Running means the container is up:

[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65c4bffcb6-q64cf   1/1     Running   0          7h44m

Next, see which port was mapped; from the output below, it's 30115:

[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        27h
nginx        NodePort    10.96.149.130   <none>        80:30115/TCP   7h43m

Since the cluster has three nodes, nginx is reachable through any of the three IPs plus that port:

# All three of the following URLs serve nginx
http://192.168.253.131:30115/
http://192.168.253.132:30115/
http://192.168.253.133:30115/
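
You can also verify from the shell; a minimal check against my first node's IP (substitute your own):

# -I fetches only the response headers; expect an HTTP 200 from nginx
curl -I http://192.168.253.131:30115/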

Now that it's deployed and reachable, which node did this nginx actually land on? There are two ways to find out.

Option 1: the describe command

kubectl describe pod nginx-65c4bffcb6-q64cf

# The Events section at the very end shows what happened; lines starting with # are my own annotations
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  # Pulling the nginx image
  Normal  Pulling    33s   kubelet            Pulling image "nginx:1.14-alpine"
  # default/nginx-65c4bffcb6-q64cf was assigned to node2; default is the namespace, nginx-65c4bffcb6-q64cf is the pod
  Normal  Scheduled  51s   default-scheduler  Successfully assigned default/nginx-65c4bffcb6-q64cf to node2
  # The nginx image was pulled successfully, taking 25.96447344 seconds
  Normal  Pulled     2s    kubelet            Successfully pulled image "nginx:1.14-alpine" in 25.96447344s
  # Create the nginx container
  Normal  Created    50s   kubelet            Created container nginx
  # Start the nginx container
  Normal  Started    50s   kubelet            Started container nginx

Option 2: the -o wide flag

kubectl get pod -o wide -n default

The result looks like this; the NODE column shows the node name:

NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
nginx-65c4bffcb6-fdjhv   1/1     Running   0          4h53m   192.168.104.8   node2   <none>           <none>

The whole installation is fairly straightforward, but not a single step can go wrong: one mistake early on will surface later as all sorts of problems in the running k8s cluster!

VIII. calico.yaml

This file was obtained with: wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml

The manifest is the calico v3.10.4 file from the URL above, applied as-is. In order, it defines: the calico-config ConfigMap (backend "bird", veth MTU 1440, and the CNI network config), the projectcalico.org CustomResourceDefinitions (FelixConfiguration, IPPool, BGPPeer, NetworkPolicy and the rest), the RBAC ClusterRoles, ClusterRoleBindings and ServiceAccounts for calico-node and calico-kube-controllers, the calico-node DaemonSet (images calico/cni:v3.10.4, calico/pod2daemon-flexvol:v3.10.4 and calico/node:v3.10.4, with CALICO_IPV4POOL_CIDR set to 192.168.0.0/16 to match --pod-network-cidr above), and the calico-kube-controllers Deployment (calico/kube-controllers:v3.10.4). The file runs to several hundred lines and is too long to reproduce legibly here, so fetch it with the wget command above, or download it on a machine that has access and copy it across.
