Original content. Please credit the source when reposting.

Author's blog: https://aronligithub.github.io/


kubernetes v1.11 binary deployment series index

  • kubernetes v1.11 binary deployment

    • (1) Environment introduction
    • (2) Self-signed TLS certificates with OpenSSL
    • (3) Deploying the master components
    • (4) Deploying the node components

Preface

The previous chapters walked through the full kubernetes v1.11 production binary deployment. After the cluster is deployed, the nodes are still NotReady, because the Calico CNI plugin has not been installed yet. In this chapter I cover deploying Calico as the cluster's network fabric.


Choosing a network plugin for the kubernetes cluster

For over a year in production I used flannel for networking between the cluster's docker containers, and nothing felt terribly wrong with it; but after weighing the performance difference between flannel and calico, I decided to switch to calico.

So why do flannel and calico differ in performance?

That comes down to my usual application scenario and to how flannel and calico each work.

First, my typical production environment

Our production environments are either bare-metal CentOS 7 machines, or several CentOS 7 servers bought directly on Alibaba Cloud or Tencent Cloud.
The most troublesome case is a customer-provided private cloud: a few CentOS 7 virtual machines on a vSphere-style cluster, often with fairly frequent network jitter that nobody can do anything about. When the customer's private cloud acts up, all you can do is keep your composure, hold off the pressure streaming in from all sides, and quietly dig into why the cluster is misbehaving; in the end the etcd heartbeat logs show heavy network latency, which destabilizes the etcd cluster and, with it, the entire kubernetes cluster.

What flannel does in this kind of environment

[Figure: flannel architecture diagram]

In this server environment, flannel effectively leaves you with the vxlan backend: layer-2 frames are encapsulated, forwarded through flannel's gateway, and then decapsulated on arrival at the docker container on the other server.

This forwarding path costs a fair amount of server CPU, and its forwarding efficiency is comparatively low (relatively speaking, of course; having run it in production for over a year, it is acceptable if your requirements are modest).

Of course, kubernetes users familiar with flannel will point out that the host-gw backend is far more efficient.

But host-gw only works when all the servers sit on the same layer-2 segment, behind one switch.

And when your servers are on Alibaba Cloud, Tencent Cloud, or a similar platform, you are stuck with vxlan: those VMs are generally not behind a single switch, and in my testing host-gw simply does not work there.
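
For reference, when flannel is backed directly by etcd (as in this binary deployment), the backend choice is just one JSON document stored under flannel's conventional etcd prefix; the key path below is the default and may differ in your setup:

etcdctl get /coreos.com/network/config
# vxlan backend:   {"Network": "10.1.0.0/16", "Backend": {"Type": "vxlan"}}
# host-gw backend: {"Network": "10.1.0.0/16", "Backend": {"Type": "host-gw"}}  # requires one L2 segment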
flannel also has one more drawback:

flannel leans on the firewall. If you deploy the docker-ce package, iptables -L will often show docker's forwarding rules set to Drop, and you have to flip them to Accept manually before traffic can be forwarded.

If you instead install the stock yum install -y docker package, you will not hit this; its defaults do not set docker forwarding to Drop.
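
A sketch of the manual fix (run on each node; note that iptables -P changes do not survive a reboot unless you persist them with your distro's mechanism):

# check the FORWARD chain policy; docker-ce 1.13+ sets it to DROP
iptables -L FORWARD -n | head -1
# flip the policy to ACCEPT so cross-node pod traffic can be forwarded
iptables -P FORWARD ACCEPT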

In short, flannel depends heavily on firewall forwarding rules, while calico does not.


How Calico works, per the official docs

[Figure: Calico architecture diagram from the official docs]

Click here to visit the official Calico introduction.

Calico uses the Linux kernel's native routing and iptables firewall capabilities. All traffic into and out of each container, virtual machine, and host traverses these kernel rules before being routed to its destination.

calicoctl: lets you manage advanced policy and networking from a simple command-line interface.

orchestrator plugins: provide tight integration and synchronization with a range of popular orchestrators.

key/value store: holds Calico's policy and network configuration state.

calico/node: runs on every host, reads the relevant policy and network configuration from the key/value store, and realizes it in the Linux kernel.

Dikastes/Envoy: optional Kubernetes components that secure workload-to-workload communication with mutual TLS authentication and enforce application-layer policy.
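
To make those components concrete: once the cluster is running, calicoctl is the window into that key/value state. Two illustrative queries (assuming calicoctl is installed and pointed at the same etcd datastore):

# BGP peering status of the local calico/node
calicoctl node status
# the configured IP pools
calicoctl get ippool -o wide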

How does Calico differ from flannel?

If this is your first look at how Calico works, it probably still feels fuzzy, so let me put a few points in plain terms:

  • Both Calico and flannel need access to an etcd cluster to store their data

    • flannel requires a virtual network to be created in etcd up front; docker IP ranges are later carved out of that subnet
    • Calico needs no pre-created virtual network in etcd; it can be enabled directly, because the subnets are written straight into the Linux routing table
  • What drawback does Calico have that flannel doesn't?
    • Calico stores the pod IPs it allocates in etcd. When upgrading kubernetes you have to stop a node's services; if that node's pods are not deleted first, stale pod IP records are left behind in etcd, and those leftovers can prevent the pods from starting again when the node comes back.
    • I have not seen this happen with flannel so far.
  • How Calico and flannel differ in forwarding protocol
    • Calico forwards based on the BGP protocol, while flannel forwards by encapsulating data frames (see the route-table check after this list).
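
The BGP point is easy to verify on any node once Calico is up: the kernel routing table carries one route per remote node's pod range, installed by bird, with no userspace encapsulation daemon in the data path (the sample route below is illustrative, not captured from this cluster):

# routes programmed by calico's bird agent
ip route | grep bird
# e.g. 10.1.0.64/26 via 172.16.5.86 dev tunl0 proto bird onlink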

That's enough of the broad strokes for now. For us IT folks, theory without practice is empty, so let's look at the official quick start.


Calico quick start on Kubernetes

Click through to the Calico quick-start documentation

In the docs, find the section describing hosted installs on kubernetes

Click through to the Calico hosted-install instructions

The docs show that Calico offers several hosted-install options. Since we deployed with binaries, we can set aside the kubeadm option right away, as it is not a good fit for production.

What is the difference between the standard hosted install and the separate-datastore install?

  • Standard hosted install: Calico shares one etcd cluster with kubernetes for storing its data
  • Separate datastore: Calico and kubernetes each use their own etcd cluster for storing data

The most common option, and the one I run in production, is the standard hosted install: maintaining a single etcd cluster is easier to manage and keeps server maintenance costs down.

So let's walk through installing and deploying with the standard hosted method.


Standard hosted install

1. Check the current state of the kubernetes cluster

After the deployment in the previous chapters, the kubernetes nodes are NotReady, as shown below:

[root@server81 install_RAS_node]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-2pqfLLUo8vPQbGUyXqtN9AdDvDIymj9UrQynD59AgPA   7m        kubelet-bootstrap   Approved,Issued
node-csr-bYaJfolaFPO5HLXt96A7PHK8aKSGTQEQQdzl9lmHOOM   13m       kubelet-bootstrap   Approved,Issued
node-csr-on8Qaq30OUUTstM5rKb17OeWQOJV9s528yk_VSb-XzM   11m       kubelet-bootstrap   Approved,Issued
[root@server81 install_RAS_node]#
[root@server81 install_RAS_node]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    7m        v1.11.0
172.16.5.86   NotReady   <none>    7m        v1.11.0
172.16.5.87   NotReady   <none>    7m        v1.11.0
[root@server81 install_RAS_node]#
[root@server81 install_RAS_node]#

Why are the nodes NotReady? The logs tell us:

[root@server81 install_RAS_node]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    7m        v1.11.0
172.16.5.86   NotReady   <none>    7m        v1.11.0
172.16.5.87   NotReady   <none>    7m        v1.11.0
[root@server81 install_RAS_node]#
[root@server81 install_RAS_node]# journalctl -f
-- Logs begin at Tue 2018-07-31 18:29:27 HKT. --
Sep 02 20:29:51 server81 kubelet[14005]: W0902 20:29:51.700528   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:29:51 server81 kubelet[14005]: E0902 20:29:51.700754   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:29:56 server81 kubelet[14005]: W0902 20:29:56.705337   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:29:56 server81 kubelet[14005]: E0902 20:29:56.706274   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:01 server81 kubelet[14005]: W0902 20:30:01.710599   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:01 server81 kubelet[14005]: E0902 20:30:01.711297   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:06 server81 kubelet[14005]: W0902 20:30:06.715059   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:06 server81 kubelet[14005]: E0902 20:30:06.715924   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:11 server81 kubelet[14005]: W0902 20:30:11.720491   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:11 server81 kubelet[14005]: E0902 20:30:11.721574   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Sep 02 20:30:16 server81 kubelet[14005]: W0902 20:30:16.726016   14005 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 20:30:16 server81 kubelet[14005]: E0902 20:30:16.726467   14005 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
^C
[root@server81 install_RAS_node]#

Note the warning and error lines:

cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d

kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

This means the kubernetes CNI plugin is not usable yet. So let's install Calico step by step.
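
For comparison later: the error above means /etc/cni/net.d is empty. Once Calico's install-cni container has run, the directory should contain the config file named by CNI_CONF_NAME in the manifest:

ls /etc/cni/net.d
# expected after the install: 10-calico.conflist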


2. Download the RBAC file as described in the official docs

Official instructions: https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/hosted


RBAC
If you are deploying Calico on an RBAC-enabled cluster, you should first apply the ClusterRole and ClusterRoleBinding specs:

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml

First, review the current version of the RBAC yaml file

Browse the yaml at: https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml

# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
rules:
  - apiGroups:
    - ""
    - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
      - nodes
    verbs:
      - watch
      - list
  - apiGroups:
    - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

Now we have the yaml we need to install; let's apply it.


Create the Calico RBAC objects on the server

[root@server81 install_Calico]# wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
--2018-09-02 20:54:35--  https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/rbac.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 159.65.5.64
Connecting to docs.projectcalico.org (docs.projectcalico.org)|159.65.5.64|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1247 (1.2K) [application/x-yaml]
Saving to: ‘rbac.yaml’

100%[==================================================================================================================================================>] 1,247       --.-K/s   in 0s

2018-09-02 20:54:36 (48.3 MB/s) - ‘rbac.yaml’ saved [1247/1247]

[root@server81 install_Calico]#
[root@server81 install_Calico]# ls
calico.yaml  config_etcd_https.sh  rbac.yaml  simple
[root@server81 install_Calico]#
[root@server81 install_Calico]# kubectl apply -f rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
[root@server81 install_Calico]#

Just download the yaml and apply it as-is; no modifications are needed.


3. Install Calico

To install Calico:

  1. Download calico.yaml
  2. Configure etcd_endpoints in the provided ConfigMap to match your etcd cluster.

Then simply apply the manifest:

kubectl apply -f calico.yaml

Note

Before running the command above, make sure the provided ConfigMap is configured with the location of your etcd cluster.


Install Calico on the server

[root@server81 install_Calico]# wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
--2018-09-02 21:18:32--  https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 159.65.5.64
Connecting to docs.projectcalico.org (docs.projectcalico.org)|159.65.5.64|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11829 (12K) [application/x-yaml]
Saving to: ‘calico.yaml.1’

100%[==================================================================================================================================================>] 11,829      32.3KB/s   in 0.4s

2018-09-02 21:18:34 (32.3 KB/s) - ‘calico.yaml.1’ saved [11829/11829]

[root@server81 install_Calico]#
[root@server81 install_Calico]# ls
calico.yaml  calico.yaml.1  config_etcd_https.sh  rbac.yaml  simple
[root@server81 install_Calico]#
[root@server81 install_Calico]# vim calico.yaml.1
[root@server81 install_Calico]#
[root@server81 install_Calico]# cat calico.yaml.1
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
# This manifest includes the following component versions:
#   calico/node:v3.1.3
#   calico/cni:v3.1.3
#   calico/kube-controllers:v3.1.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://127.0.0.1:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "log_level": "info",
          "mtu": 1500,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
        # Make sure calico/node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v3.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "1440"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.1.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: quay.io/calico/kube-controllers:v3.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,profile,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
[root@server81 install_Calico]# 

Modify etcd_endpoints in the calico installation yaml
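
The only ConfigMap change needed here is etcd_endpoints; with this three-node https etcd cluster, the data section becomes:

data:
  # Configure this with the location of your etcd cluster.
  #etcd_endpoints: "http://127.0.0.1:2379"
  etcd_endpoints: "https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379"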

Point the images in the calico installation yaml at the local registry

[Screenshots: the calico-node, install-cni, and calico-kube-controllers image fields in calico.yaml]

The yaml file references three images:

image: quay.io/calico/node:v3.1.3
image: quay.io/calico/cni:v3.1.3
image: quay.io/calico/kube-controllers:v3.1.3

Change all three image addresses to the local registry address:

image: 172.16.5.81:5000/calico/node:v3.1.3
image: 172.16.5.81:5000/calico/cni:v3.1.3
image: 172.16.5.81:5000/calico/kube-controllers:v3.1.3

Why switch to local images?
Because most of these upstream images can only be pulled through a proxy from here, and slowly at that, it's better to stage them in a local registry first.

Replace the registry prefix with the local registry address (in vim):

:%s/quay.io/172.16.5.81:5000/g

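The same bulk replacement can be done non-interactively with sed, which is handy if the file ever has to be regenerated (file name assumed to be calico.yaml):

sed -i 's#quay.io#172.16.5.81:5000#g' calico.yaml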

With the yaml updated, the next step is of course to pull the original images (through a proxy) and push them to my local registry.
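
On a machine that can reach quay.io, the mirroring boils down to pull, retag, push; a minimal sketch:

# pull from quay.io, retag for the local registry, then push
for img in node:v3.1.3 cni:v3.1.3 kube-controllers:v3.1.3; do
    docker pull quay.io/calico/$img
    docker tag quay.io/calico/$img 172.16.5.81:5000/calico/$img
    docker push 172.16.5.81:5000/calico/$img
done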


4. Set up the local registry

I won't go into much detail on building the local registry in this chapter; I use a script that builds it and pushes the base images automatically:

[root@server81 registry]# ./install_docker_registry.sh
2b0fb280b60d: Loading layer [==================================================>]  5.058MB/5.058MB
05d392f56700: Loading layer [==================================================>]  7.802MB/7.802MB
32f085a1e7bb: Loading layer [==================================================>]  22.79MB/22.79MB
e23ed9242cd7: Loading layer [==================================================>]  3.584kB/3.584kB
2bf5fdee0818: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: registry:2
Error response from daemon: No such container: registry
Error: No such container: registry
a806b2b5d1838918e50dd768d6ed9a8c44e07823f67fd2c0650f91fe550dda81
e17133b79956: Loading layer [==================================================>]  744.4kB/744.4kB
Loaded image: k8s.gcr.io/pause-amd64:3.1
Redirecting to /bin/systemctl restart docker.service
The push refers to repository [172.16.5.81:5000/pause-amd64]
e17133b79956: Pushed
3.1: digest: sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d size: 527
[root@server81 registry]#
[root@server81 registry]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
a806b2b5d183        registry:2          "/entrypoint.sh /etc…"   13 seconds ago      Up 10 seconds       0.0.0.0:5000->5000/tcp   registry
[root@server81 registry]#
[root@server81 registry]# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
172.16.5.81:5000/pause-amd64   3.1                 da86e6ba6ca1        8 months ago        742kB
k8s.gcr.io/pause-amd64         3.1                 da86e6ba6ca1        8 months ago        742kB
registry                       2                   751f286bc25e        13 months ago       33.2MB
[root@server81 registry]#
[root@server81 registry]# cat restartRegistry.sh
docker stop registry
docker rm registry
docker run -d -p 5000:5000 --name=registry --restart=always \
    --privileged=true \
    --log-driver=none \
    -v /root/registry/registrydata:/var/lib/registry \
    registry:2
[root@server81 registry]#

Next, push the calico images into the registry.

[root@server81 registry]# ls
catImage.sh  image  install_docker_registry.sh  networkbox.tar  pause-amd64.tar  registry.tar  restartRegistry.sh
[root@server81 registry]#
[root@server81 registry]# cd image/
[root@server81 image]# ls
calico  coredns
[root@server81 image]#
"1.进入我之前下载好的calico镜像文件夹目录:"
[root@server81 image]# cd calico/
[root@server81 calico]# ls
cni.tar  controllers.tar  node.tar
[root@server81 calico]#
"2.分别将三个Calico镜像load进来:"
[root@server81 calico]# docker load -i cni.tar
0314be9edf00: Loading layer [==================================================>]   1.36MB/1.36MB
15db169413e5: Loading layer [==================================================>]  28.05MB/28.05MB
4252efcc5013: Loading layer [==================================================>]  2.818MB/2.818MB
76cf2496cf36: Loading layer [==================================================>]   3.03MB/3.03MB
91d3d3a16862: Loading layer [==================================================>]  2.995MB/2.995MB
18a58488ba3b: Loading layer [==================================================>]  3.474MB/3.474MB
8d8197f49da2: Loading layer [==================================================>]  27.34MB/27.34MB
7520364e0845: Loading layer [==================================================>]  9.216kB/9.216kB
b9d064622bd6: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: 172.16.5.81:5000/calico/cni:v3.1.3
[root@server81 calico]#
[root@server81 calico]# docker load -i controllers.tar
cd7100a72410: Loading layer [==================================================>]  4.403MB/4.403MB
2580685bfb60: Loading layer [==================================================>]  50.84MB/50.84MB
Loaded image: 172.16.5.81:5000/calico/kube-controllers:v3.1.3
[root@server81 calico]#
[root@server81 calico]# docker load -i node.tar
ddc4cb8dae60: Loading layer [==================================================>]   7.84MB/7.84MB
77087b8943a2: Loading layer [==================================================>]  249.3kB/249.3kB
c7227c83afaf: Loading layer [==================================================>]  4.801MB/4.801MB
2e0e333a66b6: Loading layer [==================================================>]  231.8MB/231.8MB
Loaded image: 172.16.5.81:5000/calico/node:v3.1.3
[root@server81 calico]#
"3.可以看出我已经将镜像的仓库地址都修改tag了,那么下面直接push就可以了"
[root@server81 calico]# docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
172.16.5.81:5000/calico/node               v3.1.3              7eca10056c8e        3 months ago        248MB
172.16.5.81:5000/calico/kube-controllers   v3.1.3              240a82836573        3 months ago        55MB
172.16.5.81:5000/calico/cni                v3.1.3              9f355e076ea7        3 months ago        68.8MB
172.16.5.81:5000/pause-amd64               3.1                 da86e6ba6ca1        8 months ago        742kB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        8 months ago        742kB
registry                                   2                   751f286bc25e        13 months ago       33.2MB
[root@server81 calico]#
"4.分开push三个Calico镜像至本地仓库"
[root@server81 calico]# docker push 172.16.5.81:5000/calico/node:v3.1.3
The push refers to repository [172.16.5.81:5000/calico/node]
2e0e333a66b6: Pushed
c7227c83afaf: Pushed
77087b8943a2: Pushed
ddc4cb8dae60: Pushed
cd7100a72410: Pushed
v3.1.3: digest: sha256:9871f4dde9eab9fd804b12f3114da36505ff5c220e2323b7434eec24e3b23ac5 size: 1371
[root@server81 calico]#
[root@server81 calico]# docker push 172.16.5.81:5000/calico/kube-controllers:v3.1.3
The push refers to repository [172.16.5.81:5000/calico/kube-controllers]
2580685bfb60: Pushed
cd7100a72410: Mounted from calico/node
v3.1.3: digest: sha256:2553b273c3fc3afbf624804f0a47fca452d53d97c2b3c8867c1fe629855ea91f size: 740
[root@server81 calico]#
[root@server81 calico]# docker push 172.16.5.81:5000/calico/cni:v3.1.3
The push refers to repository [172.16.5.81:5000/calico/cni]
b9d064622bd6: Pushed
7520364e0845: Pushed
8d8197f49da2: Pushed
18a58488ba3b: Pushed
91d3d3a16862: Pushed
76cf2496cf36: Pushed
4252efcc5013: Pushed
15db169413e5: Pushed
0314be9edf00: Pushed
v3.1.3: digest: sha256:0b4eb34f955f35f8d1b182267f7ae9e2be83ca6fe1b1ade63116125feb8d07b9 size: 2207
[root@server81 calico]#
"5.查看本地仓库现在有哪些镜像,可以看出calico的镜像都已经push上仓库了"
[root@server81 registry]# ./catImage.sh
{"repositories":["calico/cni","calico/kube-controllers","calico/node","pause-amd64"]}
[root@server81 registry]#
[root@server81 registry]# cat catImage.sh
curl http://localhost:5000/v2/_catalog
[root@server81 registry]#

Good, the images are all in the local registry; the next step is to configure the TLS certificates Calico needs to access the etcd cluster.


5. Configure the TLS certificates for Calico to access the https etcd cluster

First, let's look at what configuring Calico's TLS certificates involves:

  • Find the container paths where the etcd TLS files are expected (/calico-secrets/etcd-ca, etcd-cert, etcd-key)
  • Comment out the TLS entries in the Secret section
  • In the calico-node DaemonSet, mount the TLS files with a hostPath volume
  • In the calico-kube-controllers Deployment, mount the TLS files with a hostPath volume the same way

That's essentially it: change the certificate mounts in those two places (a minimal excerpt follows), then move on to the remaining kubernetes-specific parameters.
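
The change is identical in both workloads: retire the Secret-backed volume and point etcd-certs at the hostPath directory we stage in step 7 below (excerpt from the final manifest):

      volumes:
        # hostPath certs
        - name: etcd-certs
          hostPath:
            path: /etc/calico/calicoTLS
        # previously mounted from the calico-etcd-secrets Secret:
        #- name: etcd-certs
        #  secret:
        #    secretName: calico-etcd-secrets
        #    defaultMode: 0400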


6. Set the kubernetes cluster's pod IP range
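
This cluster's pod network is 10.1.0.0/24, so CALICO_IPV4POOL_CIDR in the calico-node DaemonSet has to be changed from its 192.168.0.0/16 default; per the manifest's own comment, the value must fall within the --cluster-cidr configured for the cluster:

            - name: CALICO_IPV4POOL_CIDR
              #value: "192.168.0.0/16"
              value: "10.1.0.0/24"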

7. Copy the etcd TLS files into the directory Calico will mount

"1.查看当前etcd集群的TLS证书"
[root@server81 install_Calico]# ls /etc/etcd/
etcd.conf      etcd.conf.bak  etcdSSL/
[root@server81 install_Calico]# ls /etc/etcd/etcdSSL/
ca-config.json  ca-csr.json  ca.pem    etcd-csr.json  etcd.pem
ca.csr          ca-key.pem   etcd.csr  etcd-key.pem
[root@server81 install_Calico]#
"2.编写自动配置calico证书脚本"
[root@server81 install_Calico]# cat config_etcd_https.sh
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
etcdInfo=/opt/ETCD_CLUSER_INFO
kubernetesDir=/etc/kubernetes
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS
etcdTLSDir=/etc/etcd/etcdSSL
etcdCaPem=$etcdTLSDir/ca.pem
etcdCaKeyPem=$etcdTLSDir/ca-key.pem
etcdPem=$etcdTLSDir/etcd.pem
etcdKeyPem=$etcdTLSDir/etcd-key.pem

calicoTLSDir=/etc/calico/calicoTLS

ETCD_ENDPOINT="`cat $etcdInfo | grep ETCD_ENDPOINT_2379 | cut -f 2 -d "="`"

## function
function copy_etcd_ca(){
    # stage the etcd TLS files under the names the calico manifest expects
    mkdir -p $calicoTLSDir
    cp $etcdCaPem $calicoTLSDir/etcd-ca
    cp $etcdKeyPem $calicoTLSDir/etcd-key
    cp $etcdPem $calicoTLSDir/etcd-cert
    ls $calicoTLSDir
}

copy_etcd_ca
[root@server81 install_Calico]#
[root@server81 install_Calico]# ./config_etcd_https.sh
etcd-ca  etcd-cert  etcd-key
[root@server81 install_Calico]#
"3.可以看到最后calicoTLS的三个证书文件"
[root@server81 install_Calico]# ls /etc/calico/calicoTLS/
etcd-ca  etcd-cert  etcd-key
[root@server81 install_Calico]#

8. The final modified calico.yaml

[root@server81 install_Calico]# cat calico.yaml
# Calico Version v3.1.3
# https://docs.projectcalico.org/v3.1/releases#v3.1.3
# This manifest includes the following component versions:
#   calico/node:v3.1.3
#   calico/cni:v3.1.3
#   calico/kube-controllers:v3.1.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  #etcd_endpoints: "http://127.0.0.1:2379"
  etcd_endpoints: "https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "log_level": "info",
          "mtu": 1500,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"     # "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert" # "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"   # "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
        # Make sure calico/node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: 172.16.5.81:5000/calico/node:v3.1.3
          env:
            ## Pin the interface to bind; otherwise some nodes complain that no NIC can be found
            - name: IP_AUTODETECTION_METHOD
              #value: interface=eno4         ## match one specific interface name
              #value: interface=en.*         ## regex-match the interface names across all nodes
              #value: can-reach=172.16.5.87  ## pick the interface that can reach a target IP or domain
              value: first-found             ## use the first valid interface found
            - name: IP6_AUTODETECTION_METHOD
              value: first-found
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              #value: "192.168.0.0/16"
              value: "10.1.0.0/24"
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "1440"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: 172.16.5.81:5000/calico/cni:v3.1.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # hostPath certs
        - name: etcd-certs
          hostPath:
            path: /etc/calico/calicoTLS
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        #- name: etcd-certs
        #  secret:
        #    secretName: calico-etcd-secrets
        #    defaultMode: 0400

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: 172.16.5.81:5000/calico/kube-controllers:v3.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,profile,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS files via hostPath (Secret variant commented out below).
        - name: etcd-certs
          hostPath:
            path: /etc/calico/calicoTLS
        #- name: etcd-certs
        #  secret:
        #    secretName: calico-etcd-secrets
        #    defaultMode: 0400

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
[root@server81 install_Calico]#

9. Apply the yaml to deploy calico

[root@server81 install_Calico]# kubectl apply -f calico.yaml
configmap/calico-config created
secret/calico-etcd-secrets created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
[root@server81 install_Calico]#
[root@server81 install_Calico]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   NotReady   <none>    2h        v1.11.0
172.16.5.86   NotReady   <none>    2h        v1.11.0
172.16.5.87   NotReady   <none>    2h        v1.11.0
[root@server81 install_Calico]# kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
172.16.5.81   Ready      <none>    2h        v1.11.0
172.16.5.86   NotReady   <none>    2h        v1.11.0
172.16.5.87   NotReady   <none>    2h        v1.11.0
[root@server81 install_Calico]#
[root@server81 install_Calico]# kubectl get pod -n kube-system
NAME                                       READY     STATUS              RESTARTS   AGE
calico-kube-controllers-795885ddbd-dr8t7   1/1       Running             0          1m
calico-node-26brb                          0/2       ContainerCreating   0          1m
calico-node-w2ntg                          0/2       ContainerCreating   0          1m
calico-node-zk8ch                          2/2       Running             0          1m
[root@server81 install_Calico]#

As you can see above, server81 is already Ready, while the other two machines cannot finish deploying because they lack the TLS files calico needs to reach etcd. So next, copy the files to the other two servers.


10. Copy the calico etcd TLS files to server86/87

[root@server81 install_Calico]# cd /etc/
[root@server81 etc]# scp -r calico root@server86:/etc/
etcd-ca                                                   100% 1346   340.4KB/s   00:00
etcd-key                                                  100% 1679   485.3KB/s   00:00
etcd-cert                                                 100% 1436   433.1KB/s   00:00
[root@server81 etc]#
[root@server81 etc]# scp -r calico root@server87:/etc/
etcd-ca                                                   100% 1346   400.0KB/s   00:00
etcd-key                                                  100% 1679   400.8KB/s   00:00
etcd-cert                                                 100% 1436   535.7KB/s   00:00
[root@server81 etc]#
[root@server81 etc]# kubectl get pod -n kube-system
NAME                                       READY     STATUS              RESTARTS   AGE
calico-kube-controllers-795885ddbd-dr8t7   1/1       Running             0          3m
calico-node-26brb                          0/2       ContainerCreating   0          3m
calico-node-w2ntg                          0/2       ContainerCreating   0          3m
calico-node-zk8ch                          2/2       Running             0          3m
[root@server81 etc]#

Even after the certificates are copied, the other two nodes still haven't finished deploying; a look at the logs shows why:

The image pulls fail because the local registry speaks plain http (https is not enabled), so docker on those nodes has to be configured to allow this insecure registry.

11. Configure docker for insecure access to the local registry

[root@server81 etc]# cd /etc/docker/
[root@server81 docker]# ls
daemon.json  key.json
[root@server81 docker]# cat daemon.json
{"insecure-registries":["172.16.5.81:5000"]}

Just create this daemon.json config file with the insecure registry's address, then restart the docker service.

Copy the config file to server86 and server87.

[root@server81 docker]# scp daemon.json root@server86:/etc/docker/
daemon.json                                             100%   99    19.9KB/s   00:00
[root@server81 docker]# scp daemon.json root@server87:/etc/docker/
daemon.json                                             100%   99    40.8KB/s   00:00
[root@server81 docker]#
[root@server86 docker]# ls
daemon.json  key.json
[root@server86 docker]#
[root@server86 docker]# pwd
/etc/docker
[root@server86 docker]#
[root@server86 docker]# service docker restart
Redirecting to /bin/systemctl restart docker.service
[root@server86 docker]#
[root@server87 ~]# cd /etc/docker/
[root@server87 docker]#
[root@server87 docker]# ls
daemon.json  key.json
[root@server87 docker]#
[root@server87 docker]# cat daemon.json
{"insecure-registries":["172.16.5.81:5000"]}
[root@server87 docker]#
[root@server87 docker]# service docker restart
Redirecting to /bin/systemctl restart  docker.service
[root@server87 docker]#

12. Confirm that all nodes are Ready

[root@server81 docker]# kubectl get pod -n kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
calico-kube-controllers-795885ddbd-dr8t7   1/1       Running   0          28m
calico-node-26brb                          2/2       Running   0          28m
calico-node-w2ntg                          2/2       Running   0          28m
calico-node-zk8ch                          2/2       Running   0          28m
[root@server81 docker]#
[root@server81 docker]# kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
172.16.5.81   Ready     <none>    2h        v1.11.0
172.16.5.86   Ready     <none>    2h        v1.11.0
172.16.5.87   Ready     <none>    2h        v1.11.0
[root@server81 docker]#

With that, every node is in the Ready state, and we can move on to deploying CoreDNS and the related pod services.


Of course, if you've read this far you're probably wondering what the next chapter will cover.
In the next chapter I'll walk through handling DNS resolution for kubernetes and the physical servers alike.
Click here for the next chapter: deploying CoreDNS on kubernetes.



If you want the overall index of my series, see the kubernetes and ops-development article directory.

