1. Environment

[root@k8s-master ~]# uname -a
Linux slave1 4.11.0-22.el7a.aarch64 #1 SMP Sun Sep 3 13:39:10 CDT 2017 aarch64 aarch64 aarch64 GNU/Linux
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (AltArch)
Hostname      IP            Role
k8s-master    10.2.152.78   master
k8s-node1     10.2.152.72   node

2. Edit the hosts file on the master and node

[root@k8s-master ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.2.152.78 k8s-master
10.2.152.72 k8s-node1
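
A quick sanity check (not part of the original steps) to confirm that the names resolve on both machines:

$:ping -c 1 k8s-node1     # run on the master; should report 10.2.152.72
$:ping -c 1 k8s-master    # run on the node; should report 10.2.152.78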

3. Install ntp to keep time synchronized across all servers

$:yum install ntp -y
$:vim /etc/ntp.conf
server 10.2.152.72 iburst                # address of the NTP server to sync against
#server 0.centos.pool.ntp.org iburst     # comment out the default CentOS pool servers
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
$:systemctl start ntpd.service
$:systemctl enable ntpd.service
$:systemctl status ntpd.service
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-09-06 10:44:05 CST; 1 day 3h ago
 Main PID: 2334 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─2334 /usr/sbin/ntpd -u ntp:ntp -g
Sep 06 11:01:33 slave1 ntpd[2334]: new interface(s) found: wak...r
Sep 06 11:06:54 slave1 ntpd[2334]: 0.0.0.0 0618 08 no_sys_peer
Sep 07 09:26:34 slave1 ntpd[2334]: Listen normally on 8 flanne...3
Sep 07 09:26:34 slave1 ntpd[2334]: Listen normally on 9 flanne...3
Sep 07 09:26:34 slave1 ntpd[2334]: new interface(s) found: wak...r
Sep 07 09:56:32 slave1 ntpd[2334]: Listen normally on 10 docke...3
Sep 07 09:56:32 slave1 ntpd[2334]: Listen normally on 11 flann...3
Sep 07 09:56:32 slave1 ntpd[2334]: Deleting interface #9 flann...s
Sep 07 09:56:32 slave1 ntpd[2334]: Deleting interface #7 docke...s
Sep 07 09:56:32 slave1 ntpd[2334]: new interface(s) found: wak...r
Hint: Some lines were ellipsized, use -l to show in full.
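
Peer status can also be checked directly; a leading * in the ntpq output marks the peer currently selected for synchronization (a quick check, output not from the original):

$:ntpq -p    # reach > 0 and a * prefix on the 10.2.152.72 line indicate sync is working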

4. Disable the firewall and SELinux on the master and node

$:sudo systemctl stop firewalld
$:sudo systemctl disable firewalld
$:sudo vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
$:reboot    # reboot the server
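
After the reboot, the new state can be verified (a quick check; the expected output assumes the settings above took effect):

$:getenforce                        # should print Disabled
$:systemctl is-active firewalld     # should print inactive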

5. Install Docker on the master and node

(1) Install dependencies

$:yum install -y yum-utils device-mapper-persistent-data lvm2

(2) Add the Docker yum repository

$:yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

(3) Disable the edge and test channels (show stable releases only)

$:yum-config-manager --disable docker-ce-edge
$:yum-config-manager --disable docker-ce-test

(4) Refresh the yum package index

$:yum makecache fast

(5) Install Docker

Option 1: install the latest Docker CE (this will always install the highest available version):

$:yum install docker-ce

Option 2: install a specific Docker CE version:

$:yum list docker-ce --showduplicates | sort -r    # find the version to install
$:sudo yum install docker-ce-18.06.0.ce -y
$:systemctl start docker && systemctl enable docker    # start and enable docker

Error:

An older version of Docker had been installed previously, so the installation failed with:

Transaction check error:

file /usr/bin/docker from install of docker-ce-17.12.0.ce-1.el7.centos.x86_64 conflicts with file from package docker-common-2:1.12.6-68.gitec8512b.el7.centos.aarch_64

  file /usr/bin/docker-containerd from install of docker-ce-17.12.0.ce-1.el7.centos.x86_64 conflicts with file from package docker-common-2:1.12.6-68.gitec8512b.el7.centos.aarch_64

  file /usr/bin/docker-containerd-shim from install of docker-ce-17.12.0.ce-1.el7.centos.x86_64 conflicts with file from package docker-common-2:1.12.6-68.gitec8512b.el7.centos.aarch_64

  file /usr/bin/dockerd from install of docker-ce-17.12.0.ce-1.el7.centos.x86_64 conflicts with file from package docker-common-2:1.12.6-68.gitec8512b.el7.centos.aarch_64

Remove the old Docker package:

$:yum erase docker-common-2:1.12.6-68.gitec8512b.el7.centos.aarch_64

Then install Docker again.

Error:

After the installation, checking the version with "docker version" reported:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Fix:

Configure DOCKER_HOST:

$:vim /etc/profile.d/docker.sh
with the following content:
export DOCKER_HOST=tcp://localhost:2375 

Apply it:

$:source /etc/profile
$:source /etc/bashrc

Configure the systemd unit file:

$:vim /lib/systemd/system/docker.service
Change the line
ExecStart=/usr/bin/dockerd
to
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -H tcp://0.0.0.0:7654

Note: 2375 is the management port; 7654 is a spare port.

Reload the configuration and restart:

$:systemctl daemon-reload
$:systemctl restart docker.service
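
To confirm dockerd is really listening on the sockets configured above, a quick check (ports 2375 and 7654 as set in the unit file):

$:ss -lntp | grep dockerd                   # should list *:2375 and *:7654
$:curl -s http://localhost:2375/version     # the Docker API answers on the management port with JSON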

Verify:

docker version
Output:
Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:20:16 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:23:58 2018
  OS/Arch:      linux/amd64
  Experimental: false

6. Install Kubernetes on the master and node

(1) Switch the yum repository to the Aliyun mirror

$:vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
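
Before installing, it is worth confirming that the new repo is usable; the --disableexcludes flag is needed because of the exclude=kube* line above (a quick check):

$:yum repolist | grep kubernetes
$:yum list kubelet --showduplicates --disableexcludes=kubernetes | tail -5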

(2) Install the Kubernetes packages with yum

$:yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

(3) Enable and start the kubelet service

$:systemctl enable kubelet && systemctl start kubelet

(4) Check the version

$:kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:14:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}

(5) Configure iptables bridge settings

$:vim  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
$:sysctl --system
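
These keys only exist once the br_netfilter kernel module is loaded; if sysctl --system complains about missing keys, load the module first (a hedged note, not part of the original steps):

$:modprobe br_netfilter
$:sysctl -p /etc/sysctl.d/k8s.conf    # re-applies the file and echoes the three values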

7. Disable swap

$:sudo swapoff -a
# To disable swap permanently, comment out the swap line in the following file:
# sudo vi /etc/fstab
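
A non-interactive way to comment out the swap entry (a sketch; it blindly comments any fstab line containing "swap", so back the file up first):

$:cp /etc/fstab /etc/fstab.bak
$:sed -ri '/\sswap\s/s/^/#/' /etc/fstab
$:free -h    # the Swap line should show 0B after swapoff -a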

8. Install etcd and flannel (etcd + flannel on the master, flannel only on the node)

$:yum -y install etcd
$:systemctl start etcd ; systemctl enable etcd
$:yum -y install flannel
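
A quick health check for the freshly started etcd (assuming the packaged etcdctl defaults to the v2 API, where cluster-health is available):

$:etcdctl cluster-health    # should report "cluster is healthy"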

9. Initialize the cluster on the master

$:kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.2.0.0/16 --apiserver-advertise-address=10.2.152.78
# --kubernetes-version is the K8S version installed earlier; --pod-network-cidr is the network segment for the cluster pods
Output:
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0909 11:13:01.251094   31919 kernel_validator.go:81] Validating kernel version
I0909 11:13:01.252496   31919 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.152.78]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.152.78 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 75.007389 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node localhost.localdomain as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node localhost.localdomain as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[bootstraptoken] using token: dlo2ec.ynlr9uyocy9vdnvr
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.2.152.78:6443 --token dlo2ec.ynlr9uyocy9vdnvr --discovery-token-ca-cert-hash sha256:0457cd2a8ffcf91707a71c4ef6d8717e2a8a6a2c13ad01fa1fc3f15575e28534

Following the output, run:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

The master node is now up. Save the command below so that the other nodes can join the cluster later:

You can now join any number of machines by running the following on each node

as root:

kubeadm join 10.2.152.78:6443 --token dlo2ec.ynlr9uyocy9vdnvr --discovery-token-ca-cert-hash sha256:0457cd2a8ffcf91707a71c4ef6d8717e2a8a6a2c13ad01fa1fc3f15575e28534

Error:

Running the initialization may fail with:

[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR Port-10250]: Port 10250 is in use

Cause and fix:

kubeadm automatically checks the environment for leftovers from a previous run. If any exist, they must be cleaned up before init can be executed again; use "kubeadm reset" to clean the environment and start over.
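
For example, to wipe the leftovers and rerun the initialization:

$:kubeadm reset    # removes the manifests and certificates and frees port 10250
$:kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.2.0.0/16 --apiserver-advertise-address=10.2.152.78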

10. Configure kubectl credentials

$:echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile    # the method I used
or
$:export KUBECONFIG=/etc/kubernetes/admin.conf

11. Configure the flannel network

Reference: https://github.com/coreos/flannel

$:kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
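
The flannel pods should reach Running before the nodes report Ready (a quick check):

$:kubectl get pods -n kube-system -o wide | grep flannel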

Once that completes, run the following on the master node:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    8h        v1.11.2

12. Join the node to the master

On the node, run the join command saved from the master's init output:

kubeadm join 10.2.152.78:6443 --token dlo2ec.ynlr9uyocy9vdnvr --discovery-token-ca-cert-hash sha256:0457cd2a8ffcf91707a71c4ef6d8717e2a8a6a2c13ad01fa1fc3f15575e28534

Then switch back to the master and run:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    8h        v1.11.2
k8s-node1    Ready     <none>    7h        v1.11.2
Note: it can take a moment before the node status shows Ready.
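
If the join command was lost or the token has expired, a new one can be generated on the master (kubeadm can print the complete command):

$:kubeadm token create --print-join-command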

13. Additional Docker proxy settings (optional)

$:vim /usr/lib/systemd/system/docker.service
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"
ExecStart=/usr/bin/dockerd
$:systemctl daemon-reload
$:systemctl start docker
$:docker info
Containers: 10
 Running: 0
 Paused: 0
 Stopped: 10
Images: 14
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.11.0-22.el7a.aarch64
Operating System: CentOS Linux 7 (AltArch)
OSType: linux
Architecture: aarch64
CPUs: 40
Total Memory: 95.15GiB
Name: k8s-master
ID: F7B7:H45H:DFR5:BLRY:6EKG:EFV5:JPMR:YOJW:MGMA:HUK2:UMBD:CM6B
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
$:cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
$:cat /proc/sys/net/bridge/bridge-nf-call-iptables
1

14. Summary of per-node services

Node      Services
Master    etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet, docker, flanneld
node      flanneld, docker, kube-proxy, kubelet

15. Testing Kubernetes



Deploy the Dashboard add-on

1. Download the Dashboard manifest

$:mkdir -p ~/k8s
$:cd ~/k8s
$:wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

2. Edit kubernetes-dashboard.yaml and add type: NodePort to the Dashboard Service to expose it:

# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

3. Install the Dashboard add-on

$:kubectl create -f kubernetes-dashboard.yaml

Error message:

Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists

Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists

Cause and fix:

The kubernetes-dashboard resources already exist (even though they may not show up in "kubectl get services"). Delete them with the following command and then recreate them:

$:kubectl delete -f kubernetes-dashboard.yaml

4. Grant the Dashboard account cluster-admin rights
Create a ServiceAccount named kubernetes-dashboard-admin, bind it to the cluster-admin ClusterRole, and save the manifest as kubernetes-dashboard-admin.rbac.yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

Apply it:

[root@k8s-master ~]# kubectl create -f kubernetes-dashboard-admin.rbac.yaml
serviceaccount "kubernetes-dashboard-admin" created
clusterrolebinding "kubernetes-dashboard-admin" created

5. Look up the kubernetes-dashboard-admin token

[root@k8s-master ~]# kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-jxq7l   kubernetes.io/service-account-token   3         22h
[root@k8s-master ~]# kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-jxq7l
Name:         kubernetes-dashboard-admin-token-jxq7l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
              kubernetes.io/service-account.uid=686ee8e9-ce63-11e7-b3d5-080027d38be0
Type:  kubernetes.io/service-account-token
Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1qeHE3bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY4NmVlOGU5LWNlNjMtMTFlNy1iM2Q1LTA4MDAyN2QzOGJlMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.Ua92im86o585ZPBfsOpuQgUh7zxgZ2p1EfGNhr99gAGLi2c3ss-2wOu0n9un9LFn44uVR7BCPIkRjSpTnlTHb_stRhHbrECfwNiXCoIxA-1TQmcznQ4k1l0P-sQge7YIIjvjBgNvZ5lkBNpsVanvdk97hI_kXpytkjrgIqI-d92Lw2D4xAvHGf1YQVowLJR_VnZp7E-STyTunJuQ9hy4HU0dmvbRXBRXQ1R6TcF-FTe-801qUjYqhporWtCaiO9KFEnkcYFJlIt8aZRSL30vzzpYnOvB_100_DdmW-53fLWIGYL8XFnlEWdU1tkADt3LFogPvBP4i9WwDn81AwKg_Q
ca.crt:     1025 bytes
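
The secret lookup and the describe can be combined into a single line (a convenience sketch, not from the original):

$:kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-admin/{print $1}') | grep ^token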

6. Check the Dashboard service port

[root@master k8s]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   1d
kubernetes-dashboard   NodePort    10.102.209.161   <none>        443:32513/TCP   21h
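
The assigned NodePort can also be read out directly instead of scanning the table (a jsonpath one-liner; the service name is as created above):

$:kubectl get svc -n kube-system kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
32513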

7. Open the dashboard UI in a browser: https://10.2.152.78:30000

Error:

Accessing the web UI in the browser failed. (Note: the NodePort assigned above is 32513, not 30000, so the port in the URL may simply be wrong.)

8. Using kubectl

$:kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
$:kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.2
k8s-node1    Ready     <none>    1d        v1.11.2
[root@k8s-master ~]# kubectl run kubernetes-bootcamp --image=jocatalin/kubernetes-bootcamp
deployment.apps/kubernetes-bootcamp created
[root@k8s-master ~]# kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1         1         1            0           3s
[root@k8s-master ~]# kubectl get pods
NAME                                   READY     STATUS             RESTARTS   AGE
kubernetes-bootcamp-589d48ddb4-qkn5s   0/1       ImagePullBackOff   0          54s
[root@k8s-master ~]# kubectl get pods -o wide
NAME                                   READY     STATUS             RESTARTS   AGE       IP          NODE        NOMINATED NODE
kubernetes-bootcamp-589d48ddb4-qkn5s   0/1       ImagePullBackOff   0          1m        10.2.1.12   k8s-node1   <none>
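
To find out why the pod is stuck in ImagePullBackOff, look at the Events section of describe; on an aarch64 node, an image built only for amd64 is a likely cause (a diagnostic sketch):

$:kubectl describe pod kubernetes-bootcamp-589d48ddb4-qkn5s | tail -10    # the Events section shows the exact pull/run error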

[root@k8s-master ~]# journalctl -f         # watch the k8s runtime logs

References:

https://segmentfault.com/a/1190000015787725

http://blog.51cto.com/douya/1945382
