Installing Kubernetes with RKE2 (2)

Environment Preparation

  • Set the hostname

    hostnamectl set-hostname rke2-1 && bash
    
  • System version

    [root@rke2-4 ~]# uname -a
    Linux rke2-4 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
    [root@rke2-4 ~]# cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)
    
  • Configure hosts resolution

    cat >> /etc/hosts << EOF
    192.168.3.131  rke2-1
    192.168.3.132  rke2-2
    192.168.3.133  rke2-3
    192.168.3.134  rke2-4
    EOF
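Since the same hosts block is appended on every node, a quick sanity check for duplicate IPs or hostnames before appending can save debugging later. A minimal sketch (the fragment below, with a deliberate duplicate IP, is illustrative):

```shell
# Check a hosts fragment for duplicate IPs or hostnames before
# appending it to /etc/hosts (192.168.3.132 appears twice on purpose).
hosts='192.168.3.131 rke2-1
192.168.3.132 rke2-2
192.168.3.132 rke2-3'

echo "duplicate IPs:"
printf '%s\n' "$hosts" | awk '{print $1}' | sort | uniq -d
echo "duplicate names:"
printf '%s\n' "$hosts" | awk '{print $2}' | sort | uniq -d
```

Here the first check prints `192.168.3.132`; an empty result from both means the fragment is safe to append.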
  • Disable the firewall and SELinux

    systemctl stop firewalld
    systemctl disable firewalld
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
    setenforce 0
    
    
  • Disable NetworkManager

    systemctl stop NetworkManager
    systemctl disable NetworkManager
    
    
  • Install common tools and switch the yum repos

    yum install -y ntpdate vim wget tree httpd-tools telnet lrzsz net-tools bridge-utils unzip
    curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum clean all && yum makecache
  • Synchronize the time

    ln -sf /usr/share/zoneinfo/Asia/Shanghai  /etc/localtime
    ntpdate -u ntp.aliyun.com && date
    
  • Adjust kernel parameters

    cat <<EOF >> /etc/sysctl.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward=1
    EOF
    # load the br_netfilter (bridge netfilter) module
    modprobe br_netfilter
    # apply the kernel parameters dynamically; -p loads settings from /etc/sysctl.conf
    sysctl -p /etc/sysctl.conf
    # modprobe handles loadable kernel modules automatically
    
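The `cat >> /etc/sysctl.conf` heredoc above appends blindly, so re-running the prep script duplicates the keys. A hedged sketch of an idempotent variant, writing to a local demo file instead of /etc/sysctl.conf:

```shell
# Append a sysctl key only if it is not already present, so the prep
# script can be re-run safely. CONF points at a demo file here; on a
# real node it would be /etc/sysctl.conf.
CONF=./sysctl-demo.conf
: > "$CONF"

ensure_key() {                       # ensure_key <key> <value>
  grep -q "^$1[ =]" "$CONF" || echo "$1 = $2" >> "$CONF"
}

ensure_key net.ipv4.ip_forward 1
ensure_key net.ipv4.ip_forward 1     # second call is a no-op
wc -l < "$CONF"                      # prints 1: the key was written once
```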
    
  • Open port 9345

    A TCP port used for master-to-master and master-to-worker communication.
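When the firewall stays enabled, port 9345 (and 6443 for the Kubernetes API) must be reachable between nodes. A quick probe using the shell's /dev/tcp redirection, no nc required (host and port below are examples):

```shell
# Probe a TCP port by opening /dev/tcp/<host>/<port> (a bash feature);
# a failed open means the port is unreachable or closed.
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

check_port 127.0.0.1 9345
```

Run it from each node against the server's IP after starting rke2-server; "closed" points at a firewall or routing problem.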
    

Install RKE2

Official reference: https://docs.rke2.io/install/quickstart/

Install the server node

RKE2 provides an installation script, which is a convenient way to install it as a service on systemd-based systems. The script is available at https://get.rke2.io. To install RKE2 this way, do the following:

1. Run the installer, which installs the rke2-server service and the rke2 binary onto the machine:
curl -sfL https://get.rke2.io | sh -
2. Enable the rke2-server service:
systemctl enable rke2-server
3. Start the service:
systemctl start rke2-server
4. Check the logs:
journalctl -fu rke2-server.service
5. After startup, the following files are generated:
[root@rke2-1 ~]# ll  /var/lib/rancher/rke2/
total 4
drwxr-xr-x. 7 root root 4096 Sep  8 15:54 agent
lrwxrwxrwx  1 root root   58 Sep  8 16:06 bin -> /var/lib/rancher/rke2/data/v1.21.4-rke2r2-3a2840eb67e1/bin
drwxr-xr-x. 3 root root   41 Sep  8 15:54 data
drwx------. 7 root root   99 Sep  8 16:05 server
[root@rke2-1 ~]# cd /var/lib/rancher/rke2/bin/
[root@rke2-1 bin]# ll
total 276740
-rwxr-xr-x. 1 root root  34902712 Sep  8 15:54 containerd  # container runtime
-rwxr-xr-x. 1 root root   6636544 Sep  8 15:54 containerd-shim
-rwxr-xr-x. 1 root root  11068832 Sep  8 15:54 containerd-shim-runc-v1
-rwxr-xr-x. 1 root root  11085408 Sep  8 15:54 containerd-shim-runc-v2
-rwxr-xr-x. 1 root root  23656944 Sep  8 15:54 crictl  # CLI tool for containerd
-rwxr-xr-x. 1 root root  19651576 Sep  8 15:54 ctr
-rwxr-xr-x. 1 root root  48239168 Sep  8 15:55 kubectl
-rwxr-xr-x. 1 root root 116760352 Sep  8 15:55 kubelet
-rwxr-xr-x. 1 root root  11044080 Sep  8 15:55 runc  # the low-level program that actually runs containers
-rwxr-xr-x. 1 root root    313680 Sep  8 15:55 socat  # provides port mapping for containerd
# An rke2.yaml file is also generated: like the admin.conf produced when kubernetes is initialized, it holds the cluster's certificate material, so anyone who obtains rke2.yaml effectively has admin access to the kubernetes cluster.
[root@rke2-1 bin]# cd /etc/rancher/rke2/
[root@rke2-1 rke2]# ls -l
total 4
-rw-------. 1 root root 2977 Sep  8 16:06 rke2.yaml
[root@rke2-1 rke2]# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/kubectl get node
NAME     STATUS   ROLES                       AGE   VERSION
rke2-1   Ready    control-plane,etcd,master   29m   v1.21.4+rke2r2
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/kubectl get pod -A
NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE
kube-system   cloud-controller-manager-rke2-1                         1/1     Running     0          29m
kube-system   etcd-rke2-1                                             1/1     Running     6          29m
kube-system   helm-install-rke2-canal-rtgsc                           0/1     Completed   0          29m
kube-system   helm-install-rke2-coredns-45w76                         0/1     Completed   0          29m
kube-system   helm-install-rke2-ingress-nginx-9gtsl                   0/1     Completed   0          29m
kube-system   helm-install-rke2-metrics-server-vwk77                  0/1     Completed   0          29m
kube-system   kube-apiserver-rke2-1                                   1/1     Running     0          29m
kube-system   kube-controller-manager-rke2-1                          1/1     Running     0          29m
kube-system   kube-proxy-rke2-1                                       1/1     Running     0          29m
kube-system   kube-scheduler-rke2-1                                   1/1     Running     0          29m
kube-system   rke2-canal-xwrfh                                        2/2     Running     0          27m
kube-system   rke2-coredns-rke2-coredns-7bb4f446c-zncz5               1/1     Running     0          27m
kube-system   rke2-coredns-rke2-coredns-autoscaler-7c58bd5b6c-xsh8s   1/1     Running     0          27m
kube-system   rke2-ingress-nginx-controller-b75m9                     1/1     Running     0          24m
kube-system   rke2-metrics-server-5df7d77b5b-d728t                    1/1     Running     0          25m

After running this installation:

  • The rke2-server service is installed and configured to restart automatically after the node reboots or the process crashes or is killed.
  • Additional utilities are installed to /var/lib/rancher/rke2/bin/, including kubectl, crictl, and ctr. Note: these are not on your PATH by default.
  • Two cleanup scripts are installed to /usr/local/bin: rke2-killall.sh and rke2-uninstall.sh.
  • A kubeconfig file is written to /etc/rancher/rke2/rke2.yaml.
  • A token that can be used to register other server or agent nodes is written to /var/lib/rancher/rke2/server/node-token.

**Note:** if you add further server nodes, the total must be an odd number. An odd count is needed to maintain etcd quorum; see the high-availability documentation for details.
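The odd-number rule comes straight from etcd quorum arithmetic: with n voting members, quorum is floor(n/2)+1, so an even member count tolerates no more failures than the next smaller odd count. A small illustration:

```shell
# etcd quorum: floor(n/2)+1 members must be up; the cluster survives
# n - quorum failures. Note that 4 servers tolerate no more than 3 do.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerates() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 1 2 3 4 5; do
  printf 'servers=%d quorum=%d tolerates=%d\n' "$n" "$(quorum "$n")" "$(tolerates "$n")"
done
```

The output shows tolerates=1 for both 3 and 4 servers: the fourth server adds load without adding fault tolerance.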

Manually configuring RKE2 parameters and settings

Note: the file must be named config.yaml

[root@rke2-1 rke2]# cat config.yaml
token: K105a1bba0a11f93cf7231f0093d16d0d20156f8aa46cb1c5fc8ea8cc6df42a52df::server:5e9d82ee38c21ad5f794c5da30764de7
tls-san:
  - my-kubernetes-domain.com
  - another-kubernetes-domain.com
node-name: "rke2-1"
#node-taint:
#  - "CriticalAddonsOnly=true:NoExecute"
node-label:
  - "node=Master"
  - "rke2-1=Master"

Configuration explained

# a worker must present the master's token to communicate with the master
token:
# creating the k8s cluster generates a set of tls certificates; tls-san lists
# the aliases/domains those certificates should cover. Any alias listed here
# is accepted by tls verification.
tls-san:
  - my-kubernetes-domain.com
  - another-kubernetes-domain.com
# the node name shown in `kubectl get node`
node-name: "rke2-1"
# with the taint, the node is master-only; without it, it also schedules
# workloads. This can be changed later with kubectl.
#node-taint:
#  - "CriticalAddonsOnly=true:NoExecute"
# labels can also be added or removed with kubectl
node-label:
  - "node=Master"
  - "rke2-1=Master"

Get the token

# get the token and fill it into the config file above
[root@rke2-1 ~]# cat /var/lib/rancher/rke2/server/node-token
K105a1bba0a11f93cf7231f0093d16d0d20156f8aa46cb1c5fc8ea8cc6df42a52df::server:5e9d82ee38c21ad5f794c5da30764de7
# reload to apply the config
[root@rke2-1 rke2]# systemctl daemon-reload
[root@rke2-1 rke2]# systemctl restart rke2-server
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/kubectl get node
NAME     STATUS   ROLES                       AGE   VERSION
rke2-1   Ready    control-plane,etcd,master   55m   v1.21.4+rke2r2
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/kubectl get pod -A
NAMESPACE     NAME                                                    READY   STATUS      RESTARTS   AGE
kube-system   cloud-controller-manager-rke2-1                         1/1     Running     1          55m
kube-system   etcd-rke2-1                                             1/1     Running     1          55s
kube-system   helm-install-rke2-canal-rtgsc                           0/1     Completed   0          55m
kube-system   helm-install-rke2-coredns-45w76                         0/1     Completed   0          55m
kube-system   helm-install-rke2-ingress-nginx-9gtsl                   0/1     Completed   0          55m
kube-system   helm-install-rke2-metrics-server-99vnw                  0/1     Completed   0          4s
kube-system   kube-apiserver-rke2-1                                   1/1     Running     1          55s
kube-system   kube-controller-manager-rke2-1                          1/1     Running     1          55m
kube-system   kube-proxy-rke2-1                                       1/1     Running     0          55m
kube-system   kube-scheduler-rke2-1                                   1/1     Running     1          55m
kube-system   rke2-canal-xwrfh                                        2/2     Running     0          53m
kube-system   rke2-coredns-rke2-coredns-7bb4f446c-zncz5               1/1     Running     0          53m
kube-system   rke2-coredns-rke2-coredns-autoscaler-7c58bd5b6c-xsh8s   1/1     Running     1          53m
kube-system   rke2-ingress-nginx-controller-b75m9                     1/1     Running     0          50m
kube-system   rke2-metrics-server-5df7d77b5b-d728t                    1/1     Running     1          51m

Configure the other master nodes the same way

scp /etc/rancher/rke2/config.yaml rke2-2:/etc/rancher/rke2/
scp /etc/rancher/rke2/config.yaml rke2-4:/etc/rancher/rke2/
# On each node, change node-name and add the line below so the node registers with server 1:
server: https://192.168.3.131:9345
# rke2-2:
[root@rke2-2 rke2]# cat config.yaml
server: https://192.168.3.131:9345
token: K105a1bba0a11f93cf7231f0093d16d0d20156f8aa46cb1c5fc8ea8cc6df42a52df::server:5e9d82ee38c21ad5f794c5da30764de7
tls-san:
  - my-kubernetes-domain.com
  - another-kubernetes-domain.com
node-name: "rke2-2"
#node-taint:
#  - "CriticalAddonsOnly=true:NoExecute"
node-label:
  - "node=Master"
  - "rke2-2=Master"
# rke2-4:
[root@rke2-4 rke2]# cat /etc/rancher/rke2/config.yaml
server: https://192.168.3.131:9345
token: K105a1bba0a11f93cf7231f0093d16d0d20156f8aa46cb1c5fc8ea8cc6df42a52df::server:5e9d82ee38c21ad5f794c5da30764de7
tls-san:
  - my-kubernetes-domain.com
  - another-kubernetes-domain.com
node-name: "rke2-4"
#node-taint:
#  - "CriticalAddonsOnly=true:NoExecute"
node-label:
  - "node=Master"
  - "rke2-4=Master"
# reload to apply
systemctl daemon-reload
systemctl restart rke2-server
# check the nodes again
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/kubectl -n kube-system get node
NAME     STATUS   ROLES                       AGE    VERSION
rke2-1   Ready    control-plane,etcd,master   37m    v1.21.4+rke2r3
rke2-2   Ready    control-plane,etcd,master   23m    v1.21.4+rke2r3
rke2-4   Ready    control-plane,etcd,master   118s   v1.21.4+rke2r3

Install the worker nodes

1. Run the installer, which installs the rke2-agent service and the rke2 binary onto the machine:
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
2. Enable it at boot:
systemctl enable rke2-agent.service
3. Configure the rke2-agent service:
mkdir -p /etc/rancher/rke2
vim /etc/rancher/rke2/config.yaml
Contents of config.yaml:
server: https://<server>:9345
token: <token from server node>
# rke2-3:
scp /etc/rancher/rke2/config.yaml rke2-3:/etc/rancher/rke2/
[root@rke2-3 rke2]# cat config.yaml
server: https://192.168.3.131:9345
token: K105a1bba0a11f93cf7231f0093d16d0d20156f8aa46cb1c5fc8ea8cc6df42a52df::server:5e9d82ee38c21ad5f794c5da30764de7
node-name: "rke2-3"
node-label:
  - "node=worker"
  - "rke2-3=worker"
# reload to apply
systemctl daemon-reload
Note: the rke2 server process listens on port 9345 for new nodes to register. The kubernetes API is still served on port 6443 as usual.
4. Start the service:
systemctl start rke2-agent.service
[root@rke2-3 ~]# systemctl status rke2-agent.service
● rke2-agent.service - Rancher Kubernetes Engine v2 (agent)
   Loaded: loaded (/usr/lib/systemd/system/rke2-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-09-13 15:46:35 CST; 12s ago
     Docs: https://github.com/rancher/rke2#readme
5. Check the logs:
journalctl -fu rke2-agent
6. Check the nodes:
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/kubectl -n kube-system get node -w
NAME     STATUS   ROLES                       AGE     VERSION
rke2-1   Ready    control-plane,etcd,master   89m     v1.21.4+rke2r3
rke2-2   Ready    control-plane,etcd,master   74m     v1.21.4+rke2r3
rke2-3   Ready    <none>                      6m24s   v1.21.4+rke2r3
rke2-4   Ready    control-plane,etcd,master   53m     v1.21.4+rke2r3

**Note:** each machine must have a unique hostname. If your machines do not have unique hostnames, set the node-name parameter in the config.yaml file and give each node a valid, unique value.

For more about the config.yaml file, see the installation options documentation.

Miscellaneous


[root@rke2-1 rke2]# ls -l /run/k3s/containerd/containerd.sock
srw-rw---- 1 root root 0 Sep 13 14:31 /run/k3s/containerd/containerd.sock
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/crictl  --runtime-endpoint=unix:///run/k3s/containerd/containerd.sock ps
CONTAINER           IMAGE               CREATED             STATE               NAME                            ATTEMPT             POD ID
b210741aa5491       7589738b9ae11       2 hours ago         Running             coredns                         0                   643b9ef40c4b1
3d4c3184d1ff3       5aa19aa313a9b       2 hours ago         Running             autoscaler                      5                   6724f540c188c
c4ef09c03a22d       5d05c5a9b5533       2 hours ago         Running             metrics-server                  1                   8dfba29b6803a
686f9ae82f6d9       55e81dd7316be       2 hours ago         Running             cloud-controller-manager        2                   66ecf8d51a225
c5fbcfae8def6       9e2f766bd35d6       2 hours ago         Running             kube-scheduler                  2                   87a6b265d5da2
7f740352a479f       9e2f766bd35d6       2 hours ago         Running             kube-controller-manager         2                   c19561eddcf4b
0eb2344d4d26b       9e2f766bd35d6       2 hours ago         Running             kube-apiserver                  1                   afe17cea25ea0
929a20b5f356b       271c0a695260e       2 hours ago         Running             etcd                            1                   c46cf018a870a
4de1d88f8f423       fffb9e128464f       2 hours ago         Running             rke2-ingress-nginx-controller   0                   b82a44372ee28
2a95f5d414d64       7589738b9ae11       2 hours ago         Running             coredns                         0                   aadde4683420b
e30a24115a4c7       366c64051af85       2 hours ago         Running             kube-flannel                    0                   1a11ecf1b650c
d4aedfaf8ee17       736cae9d947ba       2 hours ago         Running             calico-node                     0                   1a11ecf1b650c
044e6e56b933c       9e2f766bd35d6       2 hours ago         Running             kube-proxy                      1
# the full command is too long; link the kubeconfig and crictl config to their default locations
[root@rke2-1 rke2]# mkdir -p ~/.kube
[root@rke2-1 rke2]# ln -s /etc/rancher/rke2/rke2.yaml ~/.kube/config
[root@rke2-1 rke2]# ll ~/.kube/config
lrwxrwxrwx 1 root root 27 Sep 13 16:36 /root/.kube/config -> /etc/rancher/rke2/rke2.yaml
[root@rke2-1 rke2]# ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
[root@rke2-1 rke2]# chmod 600 ~/.kube/config
[root@rke2-1 rke2]# /var/lib/rancher/rke2/bin/crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                            ATTEMPT             POD ID
b210741aa5491       7589738b9ae11       2 hours ago         Running             coredns                         0                   643b9ef40c4b1
3d4c3184d1ff3       5aa19aa313a9b       2 hours ago         Running             autoscaler                      5                   6724f540c188c
c4ef09c03a22d       5d05c5a9b5533       2 hours ago         Running             metrics-server                  1                   8dfba29b6803a
686f9ae82f6d9       55e81dd7316be       2 hours ago         Running             cloud-controller-manager        2                   66ecf8d51a225
c5fbcfae8def6       9e2f766bd35d6       2 hours ago         Running             kube-scheduler                  2                   87a6b265d5da2
7f740352a479f       9e2f766bd35d6       2 hours ago         Running             kube-controller-manager         2                   c19561eddcf4b
0eb2344d4d26b       9e2f766bd35d6       2 hours ago         Running             kube-apiserver                  1                   afe17cea25ea0
929a20b5f356b       271c0a695260e       2 hours ago         Running             etcd                            1                   c46cf018a870a
4de1d88f8f423       fffb9e128464f       2 hours ago         Running             rke2-ingress-nginx-controller   0                   b82a44372ee28
2a95f5d414d64       7589738b9ae11       2 hours ago         Running             coredns                         0                   aadde4683420b
e30a24115a4c7       366c64051af85       2 hours ago         Running             kube-flannel                    0                   1a11ecf1b650c
d4aedfaf8ee17       736cae9d947ba       2 hours ago         Running             calico-node                     0                   1a11ecf1b650c
044e6e56b933c       9e2f766bd35d6       2 hours ago         Running             kube-proxy

Configure your own image registry

# define a registries.yaml under /etc/rancher/rke2/
mirrors:
  myregistry.com:
    endpoint:
      - "https://myregistry.com:5000"
configs:
  "myregistry.com:5000":
    auth:
      username: xxxx
      password: xxxx
    tls:
      cert_file: /path
      key_file:
      ca_file:
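Getting the YAML indentation right matters here, so one option is generating registries.yaml from variables with a heredoc. A sketch (the registry name and port are examples, and the output path is a local file for illustration; on a node it would be /etc/rancher/rke2/registries.yaml):

```shell
# Generate a registries.yaml mirror section from variables.
REG=myregistry.com
PORT=5000
OUT=./registries.yaml

cat > "$OUT" <<EOF
mirrors:
  $REG:
    endpoint:
      - "https://$REG:$PORT"
EOF

grep -n "endpoint" "$OUT"   # confirm the section was written
```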

Upgrading RKE2

# upgrade a server: run the installer again
curl -sfL https://get.rke2.io | sh -
# upgrade a worker
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
# upgrade a server to a specific version
curl -sfL https://get.rke2.io | INSTALL_RKE2_VERSION=vx.y.z sh -
# upgrade a worker to a specific version
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_VERSION=vx.y.z sh -

ETCD snapshots

# rke2 itself runs an etcd snapshot job; snapshot files land in the directory below
[root@rke2-1 ~]# ls -l /var/lib/rancher/rke2/server/db/snapshots/
total 0
# By default a snapshot of the local etcd is taken every 12 hours, only on master
# nodes that run etcd; configure this on every master node. The schedule can be
# changed by adding the two lines below to config.yaml; worker nodes do not need
# the snapshot parameters.
vi /etc/rancher/rke2/config.yaml
etcd-snapshot-retention: 2
etcd-snapshot-schedule-cron: '*/2 * * * *'
kubelet-arg:
  - "eviction-hard=nodefs.available<1%,memory.available<10Mi"
  - "eviction-soft-grace-period=nodefs.available=30s,imagefs.available=30s"
  - "eviction-soft=nodefs.available<5%,imagefs.available<1%"
Notes:
# number of snapshot files to keep: only two, the oldest is deleted as new ones are written
etcd-snapshot-retention: 2
# same syntax as crontab (minute hour day month weekday); the default is '0 */12 * * *'
etcd-snapshot-schedule-cron: '*/10 * * * *'
# custom directory for snapshot files
etcd-snapshot-dir: /xx/xxx/xxx
# custom eviction (garbage-collection) settings, add on all nodes
kubelet-arg:
  - "eviction-hard=nodefs.available<1%,memory.available<10Mi"                # hard policy
  - "eviction-soft-grace-period=nodefs.available=30s,imagefs.available=30s"  # grace period before soft evictions fire
  - "eviction-soft=nodefs.available<5%,imagefs.available<1%"                 # soft policy: evict when nodefs < 5% or imagefs < 1% free
# reload to apply
systemctl daemon-reload
systemctl restart rke2-server
# confirm the settings took effect
ps -ef | grep -i kubelet
# default snapshot location
[root@rke2-1 ~]# ls /var/lib/rancher/rke2/server/db/snapshots/
etcd-snapshot-rke2-1-1631600520  etcd-snapshot-rke2-1-1631600640
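The numeric suffix in each snapshot name is a Unix timestamp, so decoding it shows exactly when the snapshot was taken (GNU date assumed):

```shell
# Snapshots are named etcd-snapshot-<node>-<unix-ts>; strip everything up to
# the last '-' and feed the timestamp to GNU date's -d "@ts" form.
name=etcd-snapshot-rke2-1-1631600520
ts=${name##*-}
date -u -d "@$ts" +%Y-%m-%dT%H:%M:%SZ   # → 2021-09-14T06:22:00Z
```

Handy for checking that the cron schedule configured above is actually firing at the interval you expect.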

Other configuration options: https://docs.rke2.io/backup_restore/#options
