title: Docker & K8S Tutorial
date: 2023-03-13 18:33:19
tags: [K8S,Docker]
categories: [K8S]


Network Policy

NetworkPolicy has been available since Kubernetes 1.3.

Policy-based network control, used to isolate applications and reduce the attack surface.

Whether pods can communicate with each other is determined by a combination of the following three identifiers:

  • other pods that are allowed (a pod cannot block access to itself)
  • namespaces that are allowed
  • IP CIDR blocks (traffic to and from the node where the pod runs is always allowed)

By default, pods are non-isolated and accept traffic from any source.
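A useful starting point before per-field rules: the standard default-deny pattern, which selects every pod in a namespace and allows nothing, flipping the default-allow behavior. A minimal sketch (the namespace my-ns is an assumption, adjust as needed):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-ns        # assumed namespace
spec:
  podSelector: {}          # empty selector = all pods in the namespace
  policyTypes:
  - Ingress
  - Egress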

Ingress: inbound traffic.

Egress: outbound traffic.

Plugins that support network policy: Calico / Weave Net / Romana / Trireme.

A NetworkPolicy needs the apiVersion, kind and metadata fields.

The NetworkPolicy object

spec.podSelector

spec:
  podSelector:
    matchLabels:
      role: db

If podSelector is empty, all pods in the namespace are selected.

spec.policyTypes

spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

policyTypes states whether the policy applies to ingress traffic, egress traffic, or both, for the selected pods.

If policyTypes is not specified, Ingress is always set by default, and Egress is set if the policy contains any egress rules.

Summary of this network policy:

  • Isolates pods with role=db in the default namespace.
  • Ingress rules: allow connections to port 6379 of all pods labeled role=db in the default namespace from:
    • all pods in the default namespace labeled role=frontend
    • all pods in any namespace labeled project=myproject
    • IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (i.e. all of 172.17.0.0/16 except 172.17.1.0/24)
  • Egress rules: allow connections from any pod labeled role=db in the default namespace to TCP port 5978 in CIDR 10.0.0.0/24.

spec.ingress

Rules that allow inbound traffic.

from

The allowed sources, which can be other pods, IP addresses, CIDR blocks, and so on.

ports

The protocols and ports the allowed inbound traffic may use.

spec.egress

Rules that allow outbound traffic.

to

The allowed destinations, which can be other pods, IP addresses, CIDR blocks, and so on.

ports

The protocols and ports the allowed outbound traffic may use.

Taints and Tolerations

  • Set a taint on a node

    apiVersion: v1
    kind: Node
    metadata:
      name: node1
    spec:
      taints:
      - key: "example.com/role"
        value: "worker"
        effect: "NoSchedule"

  • Set a toleration on a pod

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
    spec:
      tolerations:
      - key: "example.com/role"
        operator: "Equal"
        value: "worker"
        effect: "NoSchedule"
      containers:
      - name: example-container
        image: nginx:1.22

    For a Deployment, tolerations go under spec.template.spec.tolerations.
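In practice taints are usually added with kubectl rather than by editing the Node object; a minimal sketch matching the taint above (node name and key are the ones used in this section):

# add the taint to node1
kubectl taint nodes node1 example.com/role=worker:NoSchedule

# verify
kubectl describe node node1 | grep -A2 Taints

# remove the taint again (note the trailing "-")
kubectl taint nodes node1 example.com/role=worker:NoSchedule-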

Affinity and Anti-Affinity

Node affinity based on node labels

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.com/region
            operator: In
            values:
            - zone-1
  containers:
  - name: example-container
    image: nginx:1.22

example-pod can only run on nodes that carry the label example.com/region=zone-1.

Pod affinity based on pod labels

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - example-app
        topologyKey: kubernetes.io/hostname
  containers:
  - name: example-container
    image: nginx:1.22

example-pod can only run on nodes that already host pods labeled app=example-app.

Pod anti-affinity

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - example-app
        topologyKey: kubernetes.io/hostname
  containers:
  - name: example-container
    image: nginx:1.22

Ensures example-pod is not scheduled onto the same node as other pods labeled app=example-app.

Ingress

Install an ingress controller

  • nginx (default)
  • traefik
  • haproxy

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml

Create an Ingress object

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /path1
        backend:
          serviceName: service1
          servicePort: 80
      - path: /path2
        backend:
          serviceName: service2
          servicePort: 80

Requests for example.com/path1 are forwarded to port 80 of service1.

Requests for example.com/path2 are forwarded to port 80 of service2.
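A quick way to verify the routing once the controller has an address; a hedged sketch in which the controller IP, and whether example.com resolves in your environment, are assumptions:

# find the address of the ingress controller service
kubectl get svc -n ingress-nginx

# send requests with the Host header set, substituting the controller's IP
curl -H "Host: example.com" http://<INGRESS_IP>/path1
curl -H "Host: example.com" http://<INGRESS_IP>/path2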

Using GPUs with K8S

Install the NVIDIA GPU driver

Installed on each GPU node.

Install nvidia-docker

Install docker-ce

Install the node driver

Install nvidia-docker

Add the repository

curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu18.04/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

Run the installation

sudo apt update && sudo apt install -y nvidia-docker2
systemctl restart docker

Verify the installation

sudo docker run --rm nvidia/cuda:9.0-base nvidia-smi

Configure GPU resources in a pod

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: gpu-container
    image: nvidia/cuda:9.0-base
    resources:
      limits:
        nvidia.com/gpu: 1

Limits the pod to one NVIDIA GPU.
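To confirm the GPU is actually visible inside the container, a minimal check (assumes the pod above is running and the NVIDIA device plugin is serving the nvidia.com/gpu resource):

# the requested GPU should appear in the pod's resource limits
kubectl describe pod gpu-pod | grep -A2 Limits

# run nvidia-smi inside the container
kubectl exec gpu-pod -- nvidia-smi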

Setting the command a container runs at startup

apiVersion: v1
kind: Pod
metadata:
  name: command-pod
  labels:
    app: command
spec:
  containers:
  - name: command-container
    image: debian
    env:
    - name: MSG
      value: "hello world"
    command: ["printenv"]
    args: ["HOSTNAME","KUBERNETES_PORT","$(MSG)"]
  restartPolicy: OnFailure

If the configuration file sets the command and arguments to run at container startup, the command and arguments built into the container image are overridden and no longer executed.

If the YAML only sets args without a command, the command built into the image runs with the new arguments.

command: ["/bin/sh"]
args: ["-c","while true;do echo hello;sleep 10;done"]

k8s command --> docker ENTRYPOINT

k8s args --> docker CMD
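A small sketch of applying the example above and reading its output (the manifest filename is assumed; the exact env values depend on your cluster):

kubectl apply -f command-pod.yaml     # assumed filename for the manifest above
kubectl logs command-pod
# expected: the values of HOSTNAME, KUBERNETES_PORT and MSG, e.g.
# command-pod
# tcp://10.96.0.1:443
# hello world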

Introduction to Kata Containers

  • A runtime tool that uses container images to create containers as extremely lightweight virtual machines
  • Supports different hardware platforms: x86, ARM
  • Conforms to the OCI specification
  • Compatible with the Kubernetes CRI interface specification
  • Components: runtime, agent, proxy, shim, kernel
  • What actually starts a Docker container is runc, the default implementation of the OCI runtime spec; Kata Containers sits at the same level as runc
  • The difference is that Kata gives each container or pod an independent Linux kernel (it does not share the host kernel), which gives containers better isolation and security.

What is a container runtime?

From the Kubernetes point of view:

  • docker
  • containerd
  • CRI-O

From the point of view of docker, containerd and friends:

  • runc – the container runtime tool

Kata can be used as a drop-in replacement for runc.

K8S --> CRI (docker / cri-containerd / CRI-O)

dockershim -> dockerd -> containerd-shim -> runc -> container

cri-containerd -> containerd -> shim -> runc -> container

CRI-O -> conmon -> runc -> container

Components and their functions

runtime

An OCI-compliant container runtime command-line tool, mainly used to create the lightweight VM and to control the lifecycle of containers inside the VM through the agent.

agent

A runtime agent component running inside the VM, mainly used to execute the instructions the runtime passes to it and to manage the container lifecycle inside the VM.

shim

Taking Docker integration as an example, this shim is an adapter equivalent to containerd-shim, used to handle the container process's stdio and signals.

kernel

Provides the Linux kernel for the lightweight VM; several kernels are offered depending on the need, the smallest being only a little over 4 MB.

Architecture changes

Since v1.5.0, with the containerd-based integration, the architecture was adjusted: the functionality of several components (kata-shim, kata-proxy, kata-runtime and containerd-shim) was consolidated into a single binary, kata-runtime.

Using Kata Containers with Docker

Kata Containers is a lightweight virtualization technology used to further strengthen container security and isolation.

Install Kata Containers

export https_proxy=http://localhost:20171
curl -fsSL https://github.com/kata-containers/kata-containers/releases/download/3.0.2/kata-static-3.0.2-x86_64.tar.xz -o kata-static-3.0.2-x86_64.tar.xz
xz -d kata-static-3.0.2-x86_64.tar.xz
sudo tar xvf kata-static-3.0.2-x86_64.tar -C /
sudo cp /opt/kata/bin/containerd-shim-kata-v2 /usr/bin
sudo cp /opt/kata/bin/kata-runtime /usr/bin
rm -f kata-static-3.0.2-x86_64.tar

Resolving the following error reported by check

zxl@linux:/D/Workspace/Docker/kata_containers-about$ docker run -it --rm --runtime kata-runtime nginx ls /
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: /opt/kata/share/defaults/kata-containers/configuration-qemu.toml: host system doesn't support vsock: stat /dev/vhost-vsock: no such file or directory: unknown.

# check whether the current system supports vhost-vsock
zxl@linux:~$ grep CONFIG_VIRTIO_VSOCKETS /boot/config-$(uname -r)
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
zxl@linux:~$ sudo modprobe vhost_vsock
zxl@linux:~$ sudo lsmod | grep vsock
vhost_vsock            28672  0
vmw_vsock_virtio_transport_common    40960  1 vhost_vsock
vhost                  53248  2 vhost_vsock,vhost_net
vsock                  49152  2 vmw_vsock_virtio_transport_common,vhost_vsock
zxl@linux:~$ kata-runtime check
No newer release available
System is capable of running Kata Containers

Integrating Kata Containers with Docker as a runtime

sudo vim /etc/docker/daemon.json

"runtimes": {
  "kata-runtime": {
    "path": "/usr/bin/kata-runtime"
  }
}

systemctl daemon-reload
systemctl restart docker
docker info
Runtimes: io.containerd.runc.v2 kata-runtime runc

Integrating Kata Containers with containerd as a runtime

check

zxl@linux:~$ sudo modprobe vhost_vsock
zxl@linux:~$ sudo lsmod | grep vsock
vhost_vsock            28672  0
vmw_vsock_virtio_transport_common    40960  1 vhost_vsock
vhost                  53248  2 vhost_vsock,vhost_net
vsock                  49152  2 vmw_vsock_virtio_transport_common,vhost_vsock
zxl@linux:~$ sudo kata-runtime check
WARN[0000] Not running network checks as super user      arch=amd64 name=kata-runtime pid=49537 source=runtime
System is capable of running Kata Containers
System can currently create Kata Containers

Configure containerd

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
    runtime_type = "io.containerd.kata.v2"

Restart:

systemctl restart containerd

Test

zxl@linux:~$ sudo ctr run --rm --runtime "io.containerd.kata.v2" -t docker.io/library/nginx:latest test uname -r
5.19.2
zxl@linux:~$ uname -r
5.15.0-67-generic
zxl@linux:~$ sudo ctr run --rm  -t docker.io/library/nginx:latest test uname -r
5.15.0-67-generic

Viewing Docker service logs

View all logs

sudo journalctl -u docker.service

First ten error entries from today's logs

sudo journalctl -u docker.service --since today --until now | grep -i error | head -n 10

Follow logs in real time

sudo journalctl -u docker.service -f

Query logs by date and time range

sudo journalctl -u docker.service --since "2023-03-15 00:00:00" --until "2023-03-16 23:00:00"

Kuboard Dashboard Tutorial

Installation

# admin / Kuboard123
sudo docker run -d \
  --restart=unless-stopped \
  --name=kuboard-local \
  -p 30080:80/tcp \
  -p 30081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://192.168.122.1:30080" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="30081" \
  -e KUBOARD_DISABLE_AUDIT=true \
  -v /D/CACHE/docker/kuboard-data:/data \
  swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3
# The image eipwork/kuboard:v3 can also be used; the swr.cn-east-2.myhuaweicloud.com mirror usually downloads faster.
# Do not use 127.0.0.1 or localhost as the internal IP.
# Kuboard does not need to be on the same subnet as the K8S cluster; the Kuboard Agent can even reach the Kuboard Server through a proxy.
# KUBOARD_DISABLE_AUDIT=true disables the audit feature.

Add a K8S cluster

# on the master node
cat /root/.kube/config
# note the IP address for apiserver.cluster.local: it must be the master node's IP address

Creating an NFS StorageClass

Set up the NFS service

Server

yum install rpcbind nfs-server
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
vim /etc/exports
/D/CACHE/NFS *(rw,sync,no_root_squash,no_subtree_check)
exportfs -r
exportfs
showmount -e

Client

yum install -y nfs-utils
[root@node1 ~]# showmount -e 192.168.122.1
Export list for 192.168.122.1:
/D/CACHE/NFS *

Create the NFS StorageClass in Kuboard

Overview --> Create StorageClass

---
apiVersion: v1
kind: PersistentVolume
metadata:
  finalizers:
  - kubernetes.io/pv-protection
  name: nfs-pv-nfs-storage
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  mountOptions: []
  nfs:
    path: /D/CACHE/NFS
    server: 192.168.122.1
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-storageclass-provisioner
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: nfs-pvc-nfs-storage
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-storageclass-provisioner
  volumeMode: Filesystem
  volumeName: nfs-pv-nfs-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eip-nfs-client-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eip-nfs-client-provisioner-runner
rules:
- apiGroups:
  - ''
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ''
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
  - create
  - delete
- apiGroups:
  - ''
  resources:
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ''
  resources:
  - events
  verbs:
  - create
  - update
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eip-run-nfs-client-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eip-nfs-client-provisioner-runner
subjects:
- kind: ServiceAccount
  name: eip-nfs-client-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: eip-leader-locking-nfs-client-provisioner
  namespace: kube-system
rules:
- apiGroups:
  - ''
  resources:
  - endpoints
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eip-leader-locking-nfs-client-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: eip-leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: eip-nfs-client-provisioner
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: eip-nfs-nfs-storage
  name: eip-nfs-nfs-storage
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eip-nfs-nfs-storage
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: eip-nfs-nfs-storage
    spec:
      containers:
      - env:
        - name: PROVISIONER_NAME
          value: nfs-nfs-storage
        - name: NFS_SERVER
          value: 192.168.122.1
        - name: NFS_PATH
          value: /D/CACHE/NFS
        image: >-
          swr.cn-east-2.myhuaweicloud.com/kuboard-dependency/nfs-subdir-external-provisioner:v4.0.2
        name: nfs-client-provisioner
        volumeMounts:
        - mountPath: /persistentvolumes
          name: nfs-client-root
      serviceAccountName: eip-nfs-client-provisioner
      volumes:
      - name: nfs-client-root
        persistentVolumeClaim:
          claimName: nfs-pvc-nfs-storage
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    k8s.kuboard.cn/storageNamespace: 'mysql-space,default,kuboard'
    k8s.kuboard.cn/storageType: nfs_client_provisioner
  name: nfs-storage
mountOptions: []
parameters:
  archiveOnDelete: 'false'
provisioner: nfs-nfs-storage
reclaimPolicy: Delete
volumeBindingMode: Immediate
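To check that dynamic provisioning works, a hedged sketch of a throwaway claim against the nfs-storage class defined above (claim name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim        # hypothetical test claim
spec:
  storageClassName: nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

After kubectl apply, the claim should reach the Bound state and a matching subdirectory should appear under /D/CACHE/NFS.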

Importing the sample microservices in Kuboard

  • Create the namespace
  • Import or upload the YAML files
  • Run apply

Diagnosing applications in K8S

The pod stays in Pending

  • The usual cause is that the pod cannot be scheduled onto any node
  • The cluster lacks sufficient or suitable resources --> delete some pods, adjust the pod's resource requests, or add nodes
  • kubectl describe pod --> Events
  • A required resource is not ready, e.g. a ConfigMap or PVC

The pod stays in Waiting

  • Is the container image name correct?
  • Has the container image been pushed to the registry?
  • Can the node pull it with docker pull / ctr -n k8s.io image pull?

The pod is Running but does not work

  • Check the YAML for syntax and formatting problems
kubectl apply -f xxx.yaml --validate
  • Does the pod match what you expect?
kubectl get pod mypod -o yaml > mypod.yaml
kubectl apply -f mypod.yaml

Istio

Installation

curl -L https://istio.io/downloadIstio | sh -
cd istio-1.17.1
export PATH=$PATH:$PWD/bin
unset https_proxy
istioctl install --set profile=demo
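A common follow-up step, included here as a hedged sketch: enable automatic sidecar injection for a namespace and confirm the control-plane pods are up (the default namespace is just an example):

# label a namespace so new pods get the Envoy sidecar injected automatically
kubectl label namespace default istio-injection=enabled

# verify the istio-system components are running
kubectl get pods -n istio-system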

Deploying MySQL on K8S

configmap --> pvc --> svc --> deploy

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mysql
  namespace: mysql-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-mysql
  template:
    metadata:
      labels:
        app: my-mysql
    spec:
      containers:
      - name: my-mysql
        image: mysql:8.0.30
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: test
        - name: MYSQL_PASSWORD
          value: test
        ports:
        - containerPort: 3306
          protocol: TCP
          name: http
        volumeMounts:
        - name: my-mysql-data
          mountPath: /var/lib/mysql
        - name: mysql-conf
          mountPath: /etc/mysql/mysql.conf.d
      volumes:
      - name: mysql-conf
        configMap:
          name: mysql-conf
      - name: my-mysql-data
        persistentVolumeClaim:
          claimName: my-mysql-data

pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-mysql-data
  namespace: mysql-space
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-conf
  namespace: mysql-space
data:
  mysql.cnf: |
    [mysqld]
    pid-file        = /var/run/mysqld/mysqld.pid
    socket          = /var/run/mysqld/mysqld.sock
    datadir         = /var/lib/mysql
    symbolic-links=0
    sql-mode=ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION

Service

apiVersion: v1
kind: Service
metadata:
  name: mysql-export
  namespace: mysql-space
spec:
  type: NodePort
  selector:
    app: my-mysql
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 32306

Connection test

Connect with DBeaver.
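Alternatively, a quick command-line check through the NodePort; a hedged sketch where <NODE_IP> is any cluster node's address and the credentials are the ones from the Deployment env above:

mysql -h <NODE_IP> -P 32306 -u test -ptest
# or, for the root account:
mysql -h <NODE_IP> -P 32306 -u root -proot -e "SHOW DATABASES;"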

Deploying MongoDB on K8S

sts

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: my-space
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      role: mongo
      environment: test
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: 'mongo:3.4'
        command:
        - mongod
        - '--replSet'
        - rs0
        - '--bind_ip'
        - 0.0.0.0
        - '--smallfiles'
        - '--noprealloc'
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: 'role=mongo,environment=test'
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

svc

apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: my-space
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo

Connection string

mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname
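To confirm the sidecar has assembled the replica set, a hedged check against the first member (assumes the mongo shell inside the mongo:3.4 image and the rs0 replica set from the command above):

kubectl exec -n my-space mongo-0 -c mongo -- mongo --eval 'printjson(rs.status())'
# each member should report a stateStr of PRIMARY or SECONDARY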

Deploying Zookeeper on K8S

Deployment manifests

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1 # minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: IfNotPresent
        image: "kubebiz/zookeeper:3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      storageClassName: "nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Test

# connect
zxl@linux:/D/Develop/apache-zookeeper-3.6.2-bin/bin$ ./zkCli.sh -server localhost:30281

# create a ZNode and add data, read data, delete a node, list child nodes
[zk: localhost:30281(CONNECTED) 0] create /zk_test
Created /zk_test
[zk: localhost:30281(CONNECTED) 1] delete /zk_test
[zk: localhost:30281(CONNECTED) 2] create /zk_test test111
Created /zk_test
[zk: localhost:30281(CONNECTED) 3] get /zk_test
test111
[zk: localhost:30281(CONNECTED) 4] ls /zk_test
[]
[zk: localhost:30281(CONNECTED) 5] create /zk_test/test2 test222
Created /zk_test/test2
[zk: localhost:30281(CONNECTED) 6] ls /zk_test
[test2]
[zk: localhost:30281(CONNECTED) 7] get /zk_test/test2
test222
[zk: localhost:30281(CONNECTED) 8]

Deploying Kafka on K8S

Deployment manifests

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-hs
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: IfNotPresent
        image: kubebiz/kafka:0.10.2.1
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 9093
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties \
          --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \
          --override log.dirs=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      storageClassName: "nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

测试

kafka@kafka-0:/$ kafka-topics.sh --create --zookeeper zk-cs.default.svc.cluster.local:2181 --replication-factor 1 --partitions 1 --topic test-topic
Created topic "test-topic".
kafka@kafka-0:/$ kafka-topics.sh --delete --zookeeper zk-cs.default.svc.cluster.local:2181 --topic test-topic
Topic test-topic is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
kafka@kafka-0:/$
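A hedged end-to-end check using the console producer and consumer shipped in the same image (topic name reused from the transcript above; for Kafka 0.10.2 the producer still takes --broker-list; run the two commands in separate shells on a kafka pod):

# shell 1: consume
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test-topic --from-beginning

# shell 2: produce a message
echo "hello kafka" | kafka-console-producer.sh --broker-list localhost:9093 --topic test-topic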

Deploying Harbor on K8S

Deploy with Helm

version: 2.7.1

root@linux:~/harbor-helm# helm repo add harbor https://helm.goharbor.io
root@linux:~/harbor-helm# helm fetch harbor/harbor --untar
root@linux:~/harbor-helm# export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
root@linux:~/harbor-helm# helm install myharbor harbor -n harbor

Access Harbor through the ingress

vim /etc/hosts
127.0.0.1               core.harbor.domain

http://core.harbor.domain

Generate a self-signed HTTPS certificate

# Generate a Certificate Authority certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=core.harbor.domain" \
  -key ca.key \
  -out ca.crt

# Generate a server certificate
openssl genrsa -out core.harbor.domain.key 4096
openssl req -sha512 -new \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=core.harbor.domain" \
  -key core.harbor.domain.key \
  -out core.harbor.domain.csr

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=core.harbor.domain
DNS.2=core.harbor
DNS.3=hostname
EOF

openssl x509 -req -sha512 -days 3650 \
  -extfile v3.ext \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in core.harbor.domain.csr \
  -out core.harbor.domain.crt

Configure the certificate

k delete secret myharbor-ingress -n harbor
k create secret -n harbor generic myharbor-ingress --from-file=ca.crt=ca.crt --from-file=tls.crt=core.harbor.domain.crt --from-file=tls.key=core.harbor.domain.key

Access test

https://core.harbor.domain
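To use the registry from Docker, a hedged sketch: trust the self-signed CA on the client and push an image (the admin password shown is the Harbor chart default, and the project name library is an assumption):

# trust the CA generated above
sudo mkdir -p /etc/docker/certs.d/core.harbor.domain
sudo cp ca.crt /etc/docker/certs.d/core.harbor.domain/

docker login core.harbor.domain -u admin -p Harbor12345
docker tag nginx:1.22 core.harbor.domain/library/nginx:1.22
docker push core.harbor.domain/library/nginx:1.22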

Deploying Elasticsearch on K8S

sts

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: elk
spec:
  serviceName: es-cluster
  replicas: 3
  selector:
    matchLabels:
      app: es-cluster
  template:
    metadata:
      name: es-cluster
      labels:
        app: es-cluster
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox
        command:
        - sysctl
        - '-w'
        - vm.max_map_count=262144
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
      containers:
      - image: 'docker.elastic.co/elasticsearch/elasticsearch:7.17.5'
        name: es
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.name
          value: my-cluster
        - name: cluster.initial_master_nodes
          value: 'es-cluster-0,es-cluster-1,es-cluster-2'
        - name: discovery.seed_hosts
          value: es-cluster
        - name: network.host
          value: _site_
        - name: ES_JAVA_OPTS
          value: '-Xms512m -Xmx512m'
        volumeMounts:
        - name: es-cluster-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: es-cluster-data
        emptyDir: {}

svc-headless

apiVersion: v1
kind: Service
metadata:
  name: es-cluster
  namespace: elk
spec:
  clusterIP: None
  ports:
  - name: http
    port: 9200
  - name: tcp
    port: 9300
  selector:
    app: es-cluster

nodeport

apiVersion: v1
kind: Service
metadata:
  name: es-cluster-nodeport
  namespace: elk
spec:
  type: NodePort
  selector:
    app: es-cluster
  ports:
  - name: http
    port: 9200
    targetPort: 9200
    nodePort: 31200
  - name: tcp
    port: 9300
    targetPort: 9300
    nodePort: 31300
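A quick health check through the NodePort once the three pods are ready; <NODE_IP> stands for any cluster node address:

curl http://<NODE_IP>:31200/_cluster/health?pretty
# "status" : "green" and "number_of_nodes" : 3 indicate the cluster formed correctly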

Deploying Filebeat on K8S

DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elk
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: 'docker.elastic.co/beats/filebeat:7.17.5'
        args:
        - '-c'
        - /etc/filebeat.yml
        - '-e'
        env:
        - name: ELASTICSEARCH_HOST
          value: es-cluster
        - name: ELASTICSEARCH_PORT
          value: '9200'
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: "123456"
        - name: ELASTIC_CLOUD_ID
          value: null
        - name: ELASTIC_CLOUD_AUTH
          value: null
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 416
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: elk
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

RBAC

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elk
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elk
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups:
  - ''
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list

Deploying Kibana on K8S

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kb
  namespace: elk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kb
  template:
    metadata:
      name: kb
      labels:
        app: kb
    spec:
      containers:
      - name: kb
        image: 'docker.elastic.co/kibana/kibana:7.17.5'
        env:
        - name: ELASTICSEARCH_HOSTS
          value: '["http://es-cluster:9200"]'
        ports:
        - name: http
          containerPort: 5601

SVC

apiVersion: v1
kind: Service
metadata:
  name: kb-svc
  namespace: elk
spec:
  type: NodePort
  selector:
    app: kb
  ports:
  - port: 5601
    targetPort: 5601
    nodePort: 31065

Deploying GitLab on K8S

sts

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitlab
  namespace: my-space
spec:
  serviceName: gitlab
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
      - name: gitlab
        image: 'gitlab/gitlab-ce:15.2.3-ce.0'
        ports:
        - containerPort: 80
          name: web

svc

apiVersion: v1
kind: Service
metadata:
  name: gitlab-svc
  namespace: my-space
spec:
  type: NodePort
  selector:
    app: gitlab
  ports:
  - port: 80
    targetPort: 80

Get the initial root password

k exec gitlab-0 -n my-space -- cat /etc/gitlab/initial_root_password
# root / C5b1sQhAfMQpHDqDsBBaB5S5BXXU95uP6u7Jj7Tfo2M=

Deploying Prometheus + Grafana on K8S

Download

git clone https://github.com/prometheus-operator/kube-prometheus.git -b release-0.12

Install

cd /D/Workspace/K8S/learning-demo/22-prometheus-deploy/kube-prometheus

# install the CRDs and the namespace
k apply --server-side -f manifests/setup

# wait until the CRDs are available, then create the remaining resources
until k get servicemonitors -A ; do date; sleep 1; echo ""; done

# optionally rewrite image registries:
# find /D/Workspace/K8S/learning-demo/22-prometheus-deploy/kube-prometheus/manifests -name "*.yaml" | xargs perl -pi -e 's|registry.k8s.io|registry.cn-hangzhou.aliyuncs.com/google_containers|g'

# images to replace manually:
prometheusAdapter-deployment.yaml --> thinkingdata/prometheus-adapter:v0.10.0
kubeStateMetrics-deployment.yaml --> bitnami/kube-state-metrics:2.7.0

k apply -f manifests
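Once the pods in the monitoring namespace are running, the dashboards can be reached with port-forwarding; a hedged sketch using the service names the kube-prometheus manifests normally create:

kubectl -n monitoring get pods

# Grafana (default login admin / admin)
kubectl -n monitoring port-forward svc/grafana 3000:3000

# Prometheus UI
kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090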

Deploying Jenkins on K8S

ns

k create ns jenkins

pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc-claim
  namespace: jenkins
spec:
  storageClassName: "nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: jenkins
        image: 'jenkins/jenkins:2.346.3-2-jdk8'
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        - containerPort: 50000
          name: agent
          protocol: TCP
        volumeMounts:
        - name: jenkins-storage
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-storage
        persistentVolumeClaim:
          claimName: jenkins-pvc-claim

svc

apiVersion: v1
kind: Service
metadata:
  name: jenkins-export
  namespace: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 32080

Get the login password

# option 1
k get po -n jenkins
k exec -it jenkins-deployment-6fb7cc656f-6mvrp -n jenkins -- cat /var/jenkins_home/secrets/initialAdminPassword
# 14095a60f7124247beeb3d1989a1b08c

# option 2
k logs -f -n jenkins jenkins-deployment-6fb7cc656f-6mvrp

rbac

Create a ServiceAccount and grant it the required permissions; it is used later when configuring the jenkins-slave.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins2
  namespace: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins2
  namespace: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins2
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins2
subjects:
- kind: ServiceAccount
  name: jenkins2
  namespace: jenkins

Install the Kubernetes plugin

The Kubernetes plugin can dynamically create slave pods for builds.

Plugin installation is relatively slow; be patient, and since it installs online the cluster needs outbound internet access.

Note: this plugin currently only supports standard K8S clusters; k3s, k0s and similar distributions are not supported.

Manage Jenkins --> Manage Plugins --> Available --> search for Kubernetes

Configure the Kubernetes plugin

This lets Jenkins connect to the K8S cluster and call its API to dynamically create jenkins-slave pods that run the build jobs.

Manage Jenkins --> Manage Nodes and Clouds --> Configure Clouds -->

If the K8S cluster was installed as the root user, the kubeconfig file path is

/root/.kube/config

Deploying the RocketMQ Operator on K8S

Required Go version: 1.16.x

cd /D/Workspace/K8S/learning-demo/24-rocketmq-operator/
./01-git-clone.sh
./02-deploy.sh

Verify

# deploy the cluster (EmptyDir storage by default)
k apply -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
# expose the console on port 30000
k apply -f example/rocketmq_v1alpha1_cluster_service.yaml

Deploying Redis on K8S

ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
    dir /data
    appendonly yes
    save 900 1
    save 300 10
    save 60 10000

Pod

apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: 'redis'
    command:
    - redis-server
    - /redis-conf/redis.conf
    env:
    - name: MASTER
      value: 'true'
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: '0.1'
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-conf
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:
      name: redis-config
      items:
      - key: redis-config
        path: redis.conf

Service

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
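A minimal in-cluster connectivity check, sketched with the redis-cli that ships in the same image (service name redis-master as defined above):

kubectl exec -it redis -- redis-cli -h redis-master ping
# expected reply: PONG
kubectl exec -it redis -- redis-cli -h redis-master set k1 v1
kubectl exec -it redis -- redis-cli -h redis-master get k1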

Deploying a Redis Cluster on K8S

Redis Proxy is one way to build a Redis cluster: it is a proxy layer speaking the Redis protocol that forwards client requests to multiple Redis instances, providing high availability and load balancing. Its advantages are that no client code changes and no changes to the Redis instances are needed, the cluster can be set up quickly, and it supports dynamic scale-out and scale-in.

The steps to build a Redis cluster with a Redis proxy are:

  1. Install the proxy, for example the open-source Twemproxy or the commercial Redis Enterprise Proxy.
  2. Configure the proxy: listening port, the addresses and ports of the Redis instances, weights, and other parameters.
  3. Start the proxy so it listens for client requests and forwards them to the Redis instances.
  4. Configure master-replica replication on the Redis instances so the master's data is synchronized to the replicas.
  5. Use Redis Cluster or other tooling to scale the Redis instances in and out horizontally.

Note that when building a cluster this way you must ensure data consistency and reliability across the Redis instances; Redis Sentinel or other monitoring tools can be used to monitor and manage them. Network latency and load balancing also need to be considered to keep the cluster performant and available.

cm

# redis configuration file
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-conf
data:
  redis.conf: |
    port 6379
    masterauth 123456
    requirepass 123456
    appendonly yes
    dir /var/lib/redis
    cluster-enabled yes
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-node-timeout 5000
---
# redis-proxy configuration file
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-proxy
data:
  proxy.conf: |
    cluster redis-cluster:6379
    bind 0.0.0.0
    port 7777
    threads 8
    daemonize no
    auth 123456
    enable-cross-slot yes
    log-level error

sts

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-node
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
        - "--cluster-announce-ip"
        - "$(POD_IP)"
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-conf"
          mountPath: "/etc/redis"
        - name: "redis-data"
          mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"
        configMap:
          name: "redis-conf"
          items:
          - key: "redis.conf"
            path: "redis.conf"
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteMany","ReadWriteOnce" ]
      resources:
        requests:
          storage: 1G
      storageClassName: nfs-storage
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster

proxy

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-proxy
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-proxy
  template:
    metadata:
      labels:
        app: redis-proxy
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - name: redis-proxy
        image: nuptaxin/redis-cluster-proxy:v1.0.0
        imagePullPolicy: IfNotPresent
        command: ["redis-cluster-proxy"]
        args:
        - -c
        - /data/proxy.conf # startup configuration file
        ports:
        - name: redis-proxy
          containerPort: 7777
          protocol: TCP
        volumeMounts:
        - name: redis-proxy-conf
          mountPath: /data/
      volumes:
      - name: redis-proxy-conf
        configMap:
          name: redis-proxy

svc

apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  type: NodePort
  selector:
    app: redis
    appCluster: redis-cluster
---
# external access to the redis proxy
apiVersion: v1
kind: Service
metadata:
  name: redis-proxy-service
  labels:
    name: redis-proxy
spec:
  type: NodePort
  ports:
  - port: 7777
    protocol: TCP
    targetPort: 7777
    name: http
    nodePort: 30939
  selector:
    app: redis-proxy
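The manifests above start six Redis nodes but do not form the cluster. A hedged sketch of the usual one-time initialization, run from any redis pod; the pod DNS names follow the redis-node StatefulSet and the redis-cluster headless Service above, and the password comes from redis.conf (older redis-cli versions may require pod IPs instead of DNS names):

kubectl exec -it redis-node-0 -- redis-cli -a 123456 --cluster create \
  redis-node-0.redis-cluster:6379 \
  redis-node-1.redis-cluster:6379 \
  redis-node-2.redis-cluster:6379 \
  redis-node-3.redis-cluster:6379 \
  redis-node-4.redis-cluster:6379 \
  redis-node-5.redis-cluster:6379 \
  --cluster-replicas 1

# verify
kubectl exec -it redis-node-0 -- redis-cli -a 123456 cluster info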

Deploying Kaniko on K8S

configmap

apiVersion: v1
kind: ConfigMap
metadata:
  name: dockerfile-storage
  namespace: default
data:
  dockerfile: |
    FROM alpine
    RUN echo "created from standard input"

pod

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
  namespace: default
spec:
  containers:
  - name: kaniko
    image: kubebiz/kaniko:executor-v1.9.1
    args:
    - '--dockerfile=/workspace/dockerfile'
    - '--context=dir://workspace'
    - '--no-push'
    volumeMounts:
    - name: dockerfile-storage
      mountPath: /workspace
  restartPolicy: Never
  volumes:
  - name: dockerfile-storage
    configMap:
      name: dockerfile-storage
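Because --no-push is set, the result of the build is only visible in the executor's log; a quick way to follow it (the manifest filename is assumed):

kubectl apply -f kaniko-pod.yaml   # assumed filename for the pod above
kubectl logs -f kaniko
# the log should show the two Dockerfile steps completing; the image is built but not pushed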

Deploying PostgreSQL on K8S

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  storageClassName: "nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-sonar
  labels:
    app: postgres-sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-sonar
  template:
    metadata:
      labels:
        app: postgres-sonar
    spec:
      containers:
      - name: postgres-sonar
        image: 'postgres:11.4'
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: sonarDB
        - name: POSTGRES_USER
          value: sonarUser
        - name: POSTGRES_PASSWORD
          value: '123456'
        resources:
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 500m
            memory: 1024Mi
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-data

SVC

apiVersion: v1
kind: Service
metadata:
  name: postgres-sonar
  labels:
    app: postgres-sonar
spec:
  clusterIP: None
  ports:
  - port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: postgres-sonar

Test

# connect to the given database
psql -U sonarUser -h localhost -p 5432 -d sonarDB
# list all databases
\l

Deploying SonarQube on K8S

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-data
spec:
  storageClassName: "nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
  namespace: my-space
  labels:
    app: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      initContainers:            # init container that runs a system command
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]  # vm.max_map_count must be raised, otherwise startup may fail
        securityContext:
          privileged: true       # privileged so it can run the system command
      containers:
      - name: sonarqube
        image: 'sonarqube:8.5.1-community'
        ports:
        - containerPort: 9000
        env:
        - name: SONARQUBE_JDBC_USERNAME
          value: sonarUser
        - name: SONARQUBE_JDBC_PASSWORD
          value: '123456'
        - name: SONARQUBE_JDBC_URL
          value: 'jdbc:postgresql://postgres-sonar:5432/sonarDB'
        livenessProbe:
          httpGet:
            path: /sessions/new
            port: 9000
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /sessions/new
            port: 9000
          initialDelaySeconds: 60
          periodSeconds: 30
          failureThreshold: 6
        volumeMounts:
        - mountPath: /opt/sonarqube/conf
          name: data
          subPath: conf
        - mountPath: /opt/sonarqube/data
          name: data
          subPath: data
        - mountPath: /opt/sonarqube/extensions
          name: data
          subPath: extensions
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: sonarqube-data

SVC

apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: my-space
  labels:
    app: sonarqube
spec:
  type: NodePort
  ports:
  - name: sonarqube
    port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: sonarqube

Deploying MinIO on K8S

apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: minio
  template:
    metadata:
      labels:
        tier: minio
    spec:
      containers:
      - name: minio
        image: 'bitnami/minio:2022.8.13-debian-11-r0'
        imagePullPolicy: IfNotPresent
        env:
        - name: MINIO_ROOT_USER
          value: 'admin'
        - name: MINIO_ROOT_PASSWORD
          value: '123456'
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
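The Deployment above does not expose MinIO; a hedged NodePort Service sketch for reaching the API and the console (9000/9001 are the ports the bitnami/minio image uses by default; the nodePort values are arbitrary):

apiVersion: v1
kind: Service
metadata:
  name: minio-svc        # hypothetical service name
spec:
  type: NodePort
  selector:
    tier: minio
  ports:
  - name: api
    port: 9000
    targetPort: 9000
    nodePort: 32090
  - name: console
    port: 9001
    targetPort: 9001
    nodePort: 32091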

Deploying Nexus3 on K8S

deploy

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nexus3
  namespace: default
spec:
  selector:
    matchLabels:
      app: nexus3
  template:
    metadata:
      labels:
        app: nexus3
    spec:
      initContainers:
      - name: volume-mount-hack
        image: busybox
        command:
        - sh
        - '-c'
        - 'chown -R 200:200 /nexus-data'
        volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
      containers:
      - image: sonatype/nexus3:3.41.1
        name: nexus3
        ports:
        - containerPort: 8081
          name: nexus3
        volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
      volumes:
      - name: nexus-data
        emptyDir: {}

svc

kind: Service
apiVersion: v1
metadata:
  name: nexus3
  labels:
    app: nexus3
spec:
  type: NodePort
  selector:
    app: nexus3
  ports:
  - port: 8081
    targetPort: 8081
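Once the pod is ready, the initial admin password can be read from the data volume; a short sketch (the pod name below is a placeholder):

kubectl get pods -l app=nexus3
kubectl exec <nexus3-pod-name> -- cat /nexus-data/admin.password
# log in as admin with that password, then change it on first login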

Deploying CentOS 7 with Docker (with a graphical desktop)

Deploy

sudo docker run --rm -it --name kasmweb-centos7 --shm-size=512m -p 6901:6901 -e VNC_PW=123456 kasmweb/centos-7-desktop:1.12.0

VNC_PW sets the access password to 123456.

The default username is kasm_user.

Access is over HTTPS at https://localhost:6901.
