1. Installation Requirements

(1) Several servers running CentOS 7.6 x86_64
(2) Hardware: 2 GB RAM or more, 2 CPUs or more, 40 GB of disk or more
(3) Internet access is required to pull images; if the servers cannot reach the Internet, download the images in advance and import them on each node (see the sketch after this list)
(4) Full network connectivity between all machines in the cluster
(5) Swap disabled
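If the servers are offline, a minimal save/load sketch for moving images onto a node (the image name and target host below are only examples):

# On a machine with Internet access
docker pull nginx:1.19
docker save -o nginx-1.19.tar nginx:1.19
scp nginx-1.19.tar root@192.168.199.132:/root/
# On the offline node
docker load -i nginx-1.19.tar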

2. Prepare the Environment (apply on all three cluster machines)

Hostname  IP address
master1 192.168.199.131
node3 192.168.199.132
node4 192.168.199.133

1. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

2. Disable SELinux

sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

3. Disable swap

swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

4. Set the hostname

hostnamectl set-hostname k8s-master1   # set the matching name on each machine

3. Start the Deployment

3.1 Add hosts entries on the master

cat >> /etc/hosts << EOF
192.168.199.131 master1
192.168.199.132 node3
192.168.199.133 node4
EOF

3.2 Pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply
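If the two bridge keys are rejected when sysctl --system runs, the br_netfilter module is probably not loaded yet; an optional addition:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf  # load the module on boot
sysctl --system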

3.3 Time synchronization

yum install ntpdate -y
ntpdate time.windows.com

4. Deploy the etcd Cluster

etcd is a distributed key-value store that Kubernetes uses for all of its data, so prepare an etcd database first. To avoid a single point of failure, deploy it as a cluster: with a quorum of floor(n/2)+1, a 3-node cluster tolerates the loss of one member and a 5-node cluster tolerates two. Three nodes are used here.

Hostname  IP address
etcd-1 192.168.199.131
etcd-2 192.168.199.132
etcd-3 192.168.199.133

4.1 Prepare the cfssl certificate tooling

cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl. Run these steps on the master node:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

4.2 Generate the etcd certificates

(1) Self-signed certificate authority (CA)
Create the working directories:

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

4.2.1 Self-sign the CA:

cat > ca-config.json << EOF
{"signing": {"default": {"expiry": "87600h"},"profiles": {"www": {"expiry": "87600h","usages": ["signing","key encipherment","server auth","client auth"]}}}
}
EOFcat > ca-csr.json << EOF
{"CN": "etcd CA","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "Beijing","ST": "Beijing"}]
}
EOF


4.2.2 Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem  ca.pem


(2) Use the self-signed CA to issue the etcd HTTPS certificate
Create the certificate signing request file:

cat > server-csr.json << EOF
{"CN": "etcd","hosts": ["192.168.199.130","192.168.199.131","192.168.199.132","192.168.199.133","192.168.199.134","192.168.199.135"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing"}]
}
EOF

Note: the IPs in the hosts field must include the internal communication IP of every etcd node, without exception. To make future expansion easier, a few spare IPs can also be listed.

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls server*pem
server-key.pem  server.pem
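Optionally, the SAN list of the issued certificate can be checked with the cfssl-certinfo tool installed earlier; it should contain every etcd node IP from the hosts field:

cfssl-certinfo -cert server.pem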

4.3 Deploy the etcd cluster: download the binaries from GitHub

etcd download URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

4.3.1 Create the working directories and unpack the binary package

mkdir -p /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin

4.3.2 Create the etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.131:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.199.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.199.131:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.199.131:2380,etcd-2=https://192.168.199.132:2380,etcd-3=https://192.168.199.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF


Notes:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one

4.3.3 Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

4.3.4 Copy the previously generated certificates into /opt/etcd/ssl/

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

4.3.5 Copy the /opt/etcd directory and etcd.service from the master to the other two nodes

scp -r /opt/etcd/ root@192.168.199.132:/opt/
scp /usr/lib/systemd/system/etcd.service  root@192.168.199.132:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@192.168.199.133:/opt/
scp /usr/lib/systemd/system/etcd.service  root@192.168.199.133:/usr/lib/systemd/system/

4.3.6 Start etcd and enable it at boot (the start command on the first node will hang until the other members are started as well)

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

On each of the other two nodes (192.168.199.132 and 192.168.199.133), edit etcd.conf and change the node name and the IPs to those of the current server:

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"   # change: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.132:2380"   # change to the current server IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.132:2379"   # change to the current server IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.199.132:2380"   # change to the current server IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.199.132:2379"   # change to the current server IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.199.131:2380,etcd-2=https://192.168.199.132:2380,etcd-3=https://192.168.199.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

4.3.7 Check the cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.199.131:2379,https://192.168.199.132:2379,https://192.168.199.133:2379" endpoint health
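If every endpoint reports healthy, the membership and leader status can also be inspected with the same certificate flags (optional):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.199.131:2379,https://192.168.199.132:2379,https://192.168.199.133:2379" member list
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.199.131:2379,https://192.168.199.132:2379,https://192.168.199.133:2379" endpoint status -w table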

5. Install Docker

5.1 Binary installation

wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin

5.2 Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

5.3 Create the Docker daemon configuration file

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

registry-mirrors: Alibaba Cloud image registry mirror (accelerator)

5.4 Start Docker and enable it at boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
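As a quick sanity check (optional), docker info should list the configured registry mirror and docker version should show both client and server:

docker info | grep -A1 "Registry Mirrors"
docker version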

6. Deploy the Master Components (on the master node)

6.1 Generate the kube-apiserver certificates
6.1.1 Self-signed certificate authority (CA). Work in the ~/TLS/k8s directory created earlier:

cd ~/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

6.1.2 Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem  ca.pem

6.1.3 Use the self-signed CA to issue the kube-apiserver HTTPS certificate
6.1.3.1 Create the certificate signing request file:

cd ~/TLS/k8s
cat > server-csr.json << EOF
{"CN": "kubernetes","hosts": ["10.0.0.1","127.0.0.1","192.168.199.130","192.168.199.131","192.168.199.132","192.168.199.133","192.168.199.134","192.168.199.135","192.168.199.136","kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "k8s","OU": "System"}]
}
EOF

6.1.3.2 Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls server*pem
server-key.pem  server.pem

6.1.4 Download the Kubernetes binaries from GitHub

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

Note: the release page lists many packages; downloading the server package alone is enough, since it contains the binaries for both the master and the worker nodes.

6.1.5 Unpack the binary package and copy the binaries into place

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

6.1.6 Deploy kube-apiserver
6.1.6.1 Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.199.131:2379,https://192.168.199.132:2379,https://192.168.199.133:2379 \\
--bind-address=192.168.199.130 \\
--secure-port=6443 \\
--advertise-address=192.168.199.130 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: the double backslash above is deliberate: the first backslash escapes the second, so the heredoc writes a literal backslash followed by a newline into the file. A single backslash would be consumed by the heredoc as a line continuation.

--logtostderr: logging output (false sends logs to --log-dir instead of stderr)
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to access kubelets
--tls-xxx-file: apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

6.1.7 Copy the certificates generated above
Copy the freshly generated certificates to the path referenced in the configuration file:

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

6.1.8 Enable the TLS Bootstrapping mechanism

TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present a certificate signed by the cluster CA to communicate with kube-apiserver. Issuing client certificates by hand for a large number of nodes is a lot of work and makes scaling the cluster harder. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet starts as a low-privileged user, requests a certificate from the apiserver, and the apiserver signs it dynamically. This approach is strongly recommended on the nodes; it is currently used mainly for the kubelet, while kube-proxy still gets a certificate we issue ourselves. TLS bootstrapping workflow: the kubelet authenticates with the bootstrap token, submits a CSR, the CSR is approved on the master, and the signed certificate is written to the kubelet's --cert-dir.

6.1.9 Create the token file referenced in the configuration above

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token,username,UID,"group". The token can also be generated manually and substituted:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
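If a new token is generated, it must end up in token.csv (and later in the TOKEN variable used when building bootstrap.kubeconfig). A small sketch, with NEW_TOKEN as a throwaway variable name:

NEW_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${NEW_TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv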

6.1.10 Manage kube-apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

6.1.11 Start kube-apiserver and enable it at boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

6.1.12 Authorize the kubelet-bootstrap user to request certificates (without this, the kubelet service cannot start)

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

6.2 Deploy kube-controller-manager
6.2.1 Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: enable automatic leader election when multiple instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: CA used to automatically sign kubelet certificates; must match the apiserver's CA

6.2.2 Manage kube-controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

6.2.3 Start kube-controller-manager and enable it at boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

6.3 Deploy kube-scheduler
6.3.1 Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: enable automatic leader election when multiple instances run (HA)

6.3.2 Manage kube-scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

6.3.3 Start kube-scheduler and enable it at boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

6.3.4 Check the cluster status
6.3.4.1 Generate the certificate kubectl uses to connect to the cluster (run in ~/TLS/k8s):

cat > admin-csr.json <<EOF
{"CN": "admin","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "system:masters","OU": "System"}]
}
EOFcfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

6.3.4.2 Generate the kubeconfig file:

mkdir /root/.kube
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.199.130:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

6.3.4.3 With all components started, use kubectl to check the status of the cluster components:

kubectl get cs


Output like the above means the master node components are running normally.
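For reference, healthy output on this version usually looks roughly like the following (formatting may differ slightly):

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}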

7. Deploy the Worker Nodes

7.1 Create the working directories and copy the binaries
7.1.1 Create the working directories on each worker node:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

7.1.2 From the master node (in kubernetes/server/bin), copy the binaries to the worker nodes:

scp kubelet kube-proxy root@192.168.199.131:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.199.132:/opt/kubernetes/bin/

7.2 Deploy kubelet: create the configuration file

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
--hostname-override: node display name, must be unique in the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: kubelet parameter file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image for the pod infrastructure (pause) container

7.4 Create the kubelet parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

7.5 Generate the bootstrap.kubeconfig file (on the master node)

KUBE_APISERVER="https://192.168.199.130:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match the token in token.csv

# Generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

7.6 Copy it to the configuration path (on both worker nodes)

scp bootstrap.kubeconfig root@192.168.199.131:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig root@192.168.199.132:/opt/kubernetes/cfg/

7.7 Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

7.8 Start kubelet and enable it at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

7.9 Approve the kubelet certificate request and join the cluster

7.9.1 View the kubelet certificate requests

kubectl get csr

7.9.2 Approve the request

kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZK6M4G7bjhk8A
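The CSR name differs on every installation, so copy it from the kubectl get csr output. If several nodes are bootstrapping at the same time, all pending requests can be approved in one pass (an optional shortcut, not part of the original steps):

kubectl get csr -o name | xargs kubectl certificate approve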

7.9.3 View the nodes

kubectl get node


Note: because the network plugin has not been deployed yet, the nodes will report NotReady.
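At this stage kubectl get node typically shows something like the following (the version string depends on the binaries installed):

NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   30s   v1.18.3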

7.10 Deploy kube-proxy (configure on both worker nodes)

7.10.1 Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

7.10.2 Create the parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.0.0.0/24
EOF

7.11 Generate the kube-proxy.kubeconfig file (on the master node)
7.11.1 Generate the kube-proxy certificate
Switch to the working directory:

cd ~/TLS/k8s

7.11.2 Create the certificate signing request file

cat > kube-proxy-csr.json << EOF
{"CN": "system:kube-proxy","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "k8s","OU": "System"}]
}
EOF
#生成证书
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxyls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

7.11.3 Generate the kube-proxy.kubeconfig file

KUBE_APISERVER="https://192.168.199.130:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy it to the path referenced by the configuration file:

scp kube-proxy.kubeconfig root@192.168.199.131:/opt/kubernetes/cfg/
scp kube-proxy.kubeconfig root@192.168.199.132:/opt/kubernetes/cfg/

7.11.4 Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

7.11.5 Start kube-proxy and enable it at boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

8. Deploy the CNI Network

8.1 Prepare the CNI plugin binaries.
Download URL:

https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

8.2 Unpack the binaries into the default working directory:

mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

8.3 Deploy the CNI network:
8.3.1 Download the flannel deployment manifest and switch its image to an accessible mirror:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

8.3.2 The default image registry is unreachable, which is why the sed command above switches it to a Docker Hub mirror. Apply the manifest and verify:

kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system
kubectl get node

Once the network plugin is deployed, the nodes become Ready.
8.3.3 Authorize the apiserver to access kubelets

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml

8.3.4 Add a new Worker Node
8.3.4.1 Copy the already-deployed node files to the new node
On the master node, copy the worker-node files to the new node (192.168.199.131/132):

scp -r /opt/kubernetes root@192.168.199.131:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service  root@192.168.199.131:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.199.131:/opt/
scp /opt/kubernetes/ssl/ca.pem root@192.168.199.131:/opt/kubernetes/ssl

8.3.4.2 Delete the kubelet certificate and kubeconfig files

rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically after the certificate request is approved and differ on every node, so they must be deleted and regenerated.
8.3.4.3 Change the hostname

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1

8.3.4.4 Start the services and enable them at boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

8.3.4.5 Approve the new node's kubelet certificate request on the master

kubectl get csr

Check the node status

kubectl get node

Deploy the CNI network plugin

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The default image registry cannot be reached, so use sed to switch it to a Docker Hub mirror as shown in 8.3.1.
If raw.githubusercontent.com itself cannot be resolved, add the following entries to /etc/hosts:

199.232.68.133 raw.githubusercontent.com
199.232.68.133 user-images.githubusercontent.com
199.232.68.133 avatars2.githubusercontent.com
199.232.68.133 avatars1.githubusercontent.com
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system

NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-2pc95 1/1 Running 0 72s

9. Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Access URL: http://NodeIP:Port (reachable via either worker node)
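A quick command-line check; the NodePort is read from the Service, and the node IP here is just one of the workers from this guide:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.199.132:${NODE_PORT}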
