Scanning a Kubernetes cluster for vulnerabilities with Kubei
Kubei is a vulnerability scanner and CIS Docker Benchmark tool that gives an accurate, on-demand risk assessment of a Kubernetes cluster. It scans all images currently in use in the cluster, including the images of both application pods and system pods; it does not crawl the entire image registry and needs no prior CI/CD integration.
Kubei is configurable: you can define the scan scope (target namespaces), the scan speed, and the vulnerability severity levels of interest, and it ships with a graphical user interface.
Requirements:
- A Kubernetes cluster is up and running, and a kubeconfig for the target cluster is correctly configured in ~/.kube/config.
Required permissions:
- Read secrets cluster-wide. This is used to obtain pull credentials for images hosted in private registries.
- List pods cluster-wide.
- Create (and delete) jobs cluster-wide; a scanner job is created in the namespace of each target pod.
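The permissions above map directly onto the ClusterRole rules in the deployment manifest below; in isolation, the rules look like this:

```yaml
# ClusterRole covering the three permissions Kubei needs
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubei
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]                # read image pull secrets
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]               # list pods cluster-wide
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "delete"]   # create and clean up scanner jobs
```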
Deployment
Save the following manifest (for example as kubei.yaml) and apply it with kubectl apply -f kubei.yaml:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubei
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubei
  namespace: kubei
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubei
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubei
subjects:
- kind: ServiceAccount
  name: kubei
  namespace: kubei
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubei
---
apiVersion: v1
kind: Service
metadata:
  name: clair
  namespace: kubei
  labels:
    app: clair
spec:
  type: ClusterIP
  ports:
  - port: 6060
    protocol: TCP
    name: apiserver
  - port: 6061
    protocol: TCP
    name: health
  selector:
    app: clair
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clair
  namespace: kubei
  labels:
    app: clair
spec:
  replicas: 1
  selector:
    matchLabels:
      app: clair
  template:
    metadata:
      labels:
        app: clair
        kubeiShouldScan: "false"
    spec:
      initContainers:
      - name: check-db-ready
        image: postgres:12-alpine
        command: ['sh', '-c', 'until pg_isready -h postgres -p 5432; do echo waiting for database; sleep 2; done;']
      containers:
      - name: clair
        image: gcr.io/portshift-release/clair/clair-local-scan
        imagePullPolicy: Always
        ports:
        - containerPort: 6060
        - containerPort: 6061
        resources:
          limits:
            cpu: 2000m
            memory: 6G
          requests:
            cpu: 700m
            memory: 3G
        # uncomment the following lines in case Kubei is running behind a proxy.
        # update PROXY_IP & PROXY_PORT according to your proxy settings
        # env:
        # - name: HTTPS_PROXY
        #   value: "{ PROXY_IP }:{ PROXY_PORT }"
        # - name: HTTP_PROXY
        #   value: "{ PROXY_IP }:{ PROXY_PORT }"
        # - name: NO_PROXY
        #   value: "postgres"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: postgres
  name: postgres
  namespace: kubei
spec:
  type: ClusterIP
  ports:
  - name: db
    port: 5432
    protocol: TCP
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clair-postgres
  namespace: kubei
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
        kubeiShouldScan: "false"
    spec:
      containers:
      - name: clair-db
        image: gcr.io/portshift-release/clair/clair-db
        imagePullPolicy: Always
        ports:
        - containerPort: 5432
        resources:
          limits:
            cpu: 1500m
            memory: 1G
          requests:
            cpu: 500m
            memory: 400Mi
---
apiVersion: v1
kind: Service
metadata:
  namespace: kubei
  name: kubei
  labels:
    app: kubei
spec:
  type: ClusterIP
  ports:
  - port: 8080
    protocol: TCP
    name: http-webapp
  - port: 8081
    protocol: TCP
    name: http-klar-result
  selector:
    app: kubei
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubei
  namespace: kubei
  labels:
    app: kubei
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubei
  template:
    metadata:
      labels:
        app: kubei
        kubeiShouldScan: "false"
    spec:
      serviceAccountName: kubei
      initContainers:
      - name: init-clairsvc
        image: yauritux/busybox-curl
        args:
        - /bin/sh
        - -c
        - >
          set -x;
          while [ $(curl -sw '%{http_code}' "http://clair.kubei:6060/v1/namespaces" -o /dev/null) -ne 200 ]; do
            echo "waiting for clair to be ready";
            sleep 15;
          done
      containers:
      - name: kubei
        image: gcr.io/development-infra-208909/kubei:1.0.8
        imagePullPolicy: Always
        env:
        - name: "KLAR_IMAGE_NAME"
          value: "gcr.io/development-infra-208909/klar:1.0.7"
        - name: "DOCKLE_IMAGE_NAME"
          value: "gcr.io/development-infra-208909/dockle:1.0.0"
        - name: "MAX_PARALLELISM"     # max number of scans that will run simultaneously. defaults to 10
          value: "10"
        - name: "TARGET_NAMESPACE"    # empty = scan all namespaces
          value: ""
        - name: "SEVERITY_THRESHOLD"  # minimum level of vulnerability to report. defaults to MEDIUM
          value: "MEDIUM"
        - name: "IGNORE_NAMESPACES"   # a list of namespaces to ignore. defaults to no namespaces ignored
          value: "istio-system,kube-system"
        - name: "DELETE_JOB_POLICY"
          value: "Successful"
        - name: "SCANNER_SERVICE_ACCOUNT"  # defaults to the 'default' service account
          value: ""
        - name: "REGISTRY_INSECURE"   # allow scanner to access insecure registries (HTTP only). default is `false`.
          value: "false"
        # uncomment the following lines in case Kubei is running behind a proxy.
        # update PROXY_IP & PROXY_PORT according to your proxy settings
        # - name: SCANNER_HTTPS_PROXY
        #   value: "{ PROXY_IP }:{ PROXY_PORT }"
        # - name: SCANNER_HTTP_PROXY
        #   value: "{ PROXY_IP }:{ PROXY_PORT }"
        ports:
        - containerPort: 8080
        - containerPort: 8081
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 20Mi
```
Some of these images cannot be pulled in certain environments (gcr.io may be unreachable), so I have downloaded them and pushed copies to Docker Hub. Substitute the images as follows:
- gcr.io/portshift-release/clair/clair-local-scan ---> misterli/clair-local-scan:v2.1.4.ubi.2
- gcr.io/portshift-release/clair/clair-db ---> misterli/clair-db:12.2.ubi.2
- gcr.io/development-infra-208909/kubei:1.0.8 ---> misterli/kubei:1.0.8
- gcr.io/development-infra-208909/klar:1.0.7 ---> misterli/development-infra-208909-klar:1.0.7
- gcr.io/development-infra-208909/dockle:1.0.0 ---> misterli/evelopment-infra-208909-dockle:1.0.0
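After the substitution, the image-related fields in the manifest read as follows (only the changed lines are shown, with their locations noted in comments):

```yaml
# clair Deployment, container "clair"
image: misterli/clair-local-scan:v2.1.4.ubi.2
# clair-postgres Deployment, container "clair-db"
image: misterli/clair-db:12.2.ubi.2
# kubei Deployment, container "kubei"
image: misterli/kubei:1.0.8
env:
- name: "KLAR_IMAGE_NAME"
  value: "misterli/development-infra-208909-klar:1.0.7"
- name: "DOCKLE_IMAGE_NAME"
  value: "misterli/evelopment-infra-208909-dockle:1.0.0"
```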
Configuration
1. Scan scope: set IGNORE_NAMESPACES to ignore specific namespaces. Set TARGET_NAMESPACE to scan a single namespace, or leave it empty to scan all namespaces.
2. Scan speed: MAX_PARALLELISM sets the maximum number of simultaneous scans.
3. Severity threshold: only vulnerabilities at or above SEVERITY_THRESHOLD are reported. Supported levels are Unknown, Negligible, Low, Medium, High, Critical, and Defcon1; the default is Medium.
4. Delete-job policy: DELETE_JOB_POLICY defines whether completed scanner jobs are deleted. Supported values: All (all jobs are deleted), Successful (only successful jobs are deleted; the default), and Never (jobs are never deleted).
5. CIS Docker Benchmark: set SHOULD_SCAN_DOCKERFILE to false to disable it.
6. Scanner service account: set SCANNER_SERVICE_ACCOUNT to the name of the service account the scanner jobs should use; defaults to the default service account.
7. Insecure registries: set REGISTRY_INSECURE to true to allow the scanner to access insecure (HTTP-only) registries; the default is false.
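Putting these options together, a tuned env block for the kubei container might look like this (the values below are illustrative examples, not recommendations):

```yaml
# Example env section for the kubei container -- values are illustrative only
env:
- name: "TARGET_NAMESPACE"        # scan only the "default" namespace
  value: "default"
- name: "IGNORE_NAMESPACES"       # skip the system namespaces
  value: "istio-system,kube-system"
- name: "MAX_PARALLELISM"         # allow up to 20 simultaneous scans
  value: "20"
- name: "SEVERITY_THRESHOLD"      # report High and above only
  value: "HIGH"
- name: "DELETE_JOB_POLICY"       # delete every completed scanner job
  value: "All"
- name: "SHOULD_SCAN_DOCKERFILE"  # disable the CIS Docker Benchmark check
  value: "false"
```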
Running a scan
We deploy a Traefik IngressRoute to expose the dashboard:
```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubei
  namespace: kubei
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`kubei.lishuai.fun`)
    kind: Rule
    services:
    - name: kubei
      port: 8080
    middlewares:
    # middleware that redirects HTTP to HTTPS
    - name: redirect-https
      namespace: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubei-tls
  namespace: kubei
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`kubei.lishuai.fun`)
    kind: Rule
    services:
    - name: kubei
      port: 8080
  tls:
    certResolver: myresolver
```
The dashboard looks like this:
(screenshot: Kubei dashboard)
Click Go in the top-right corner to start a scan. Kubei then creates a scanner job for each pod in the cluster:
```shell
[root@master-01 deploy]# kubectl get job --all-namespaces -w
NAMESPACE              NAME                                                            COMPLETIONS   DURATION   AGE
kubernetes-dashboard   scanner-dashboard-69f24894-91cc-4cf4-9dc8-d5c02dc7b7db          0/1           21m        21m
longhorn-system        scanner-csi-attacher-31757dfc-f223-4ecc-878f-cd2ac2d41949       1/1           7s         17s
monitoring             scanner-configmap-reload-230805e3-b3c1-4c00-ac35-4e28b4c17c58   0/1           17s        17s
monitoring             scanner-prometheus-d6ca5856-424f-4019-8812-07f8cabae041         0/1           10s        10s
version-checker        scanner-version-checker-93b0f404-41cd-4088-b082-8622bd25f200    0/1                      0s
version-checker        scanner-version-checker-93b0f404-41cd-4088-b082-8622bd25f200    0/1           0s         0s
monitoring             scanner-prometheus-d6ca5856-424f-4019-8812-07f8cabae041         0/1           13s        13s
```
After a short wait, refresh the dashboard and the scan results appear:
(screenshots: scan results in the Kubei dashboard)