GitHub repository
I had been reading about Kubernetes custom resources online for a long time, but only reading, without ever practicing it myself. This article is mainly based on that blog post, with some simplifications of my own: I only want my service to be reachable from outside via nodeIP + port, and I want unified lifecycle management of the resources it owns.

1、Initialize a custom resource with kubebuilder

For installing kubebuilder, see my earlier blog post.

1.Create a new directory under GOPATH/src and enter it, then scaffold the custom resource files: the controller, the types, and the webhook-related files.

[root@master src]# mkdir servicemanager
[root@master src]# cd servicemanager/
[root@master servicemanager]# kubebuilder init --domain servicemanager.io
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.5.0
Update go.mod:
$ go mod tidy
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
Next: define a resource with:
$ kubebuilder create api
[root@master servicemanager]# kubebuilder create api --group servicemanager --version v1 --kind ServiceManager
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing scaffold for you to edit...
api/v1/servicemanager_types.go
controllers/servicemanager_controller.go
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
[root@master servicemanager]# kubebuilder create webhook --group servicemanager --version v1 --kind ServiceManager --defaulting --programmatic-validation
Writing scaffold for you to edit...
api/v1/servicemanager_webhook.go

The generated directory structure looks like this:

.
├── api
│   └── v1
│       ├── groupversion_info.go // GVK info and the scheme-building methods live here
│       ├── servicemanager_types.go // the custom CRD types; a file we need to edit
│       ├── servicemanager_webhook.go // webhook-related file
│       └── zz_generated.deepcopy.go // generated deepcopy methods
├── bin
│   └── manager // the compiled manager binary
├── config // everything that eventually gets kubectl-applied, split into directories by function; some of it can be customized
│   ├── certmanager
│   │   ├── certificate.yaml
│   │   ├── kustomization.yaml
│   │   └── kustomizeconfig.yaml
│   ├── crd // the CRD manifests
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_servicemanagers.yaml
│   │       └── webhook_in_servicemanagers.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   ├── manager_webhook_patch.yaml
│   │   └── webhookcainjection_patch.yaml
│   ├── manager // the manager Deployment lives here
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus // metrics exposure
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac // RBAC grants
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── servicemanager_editor_role.yaml
│   │   └── servicemanager_viewer_role.yaml
│   ├── samples // a minimal sample yaml for the custom resource
│   │   └── servicemanager_v1_servicemanager.yaml
│   └── webhook // the webhook Service that receives webhook requests forwarded by the APIServer
│       ├── kustomization.yaml
│       ├── kustomizeconfig.yaml
│       └── service.yaml
├── controllers
│   ├── servicemanager_controller.go // the core CRD controller logic lives here
│   └── suite_test.go
├── Dockerfile // Dockerfile for building the crd-controller image
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go // program entry point
├── Makefile // make build targets
└── PROJECT // project metadata

2.Edit the servicemanager_types.go file

type ServiceManagerSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Category may only take one of two values; the enum marker below
	// restricts the field to "Deployment" or "Statefulset".
	// +kubebuilder:validation:Enum=Deployment;Statefulset
	Category string `json:"category,omitempty"`
	// label selector
	Selector map[string]string `json:"selector,omitempty"`
	// pod template used by the deployment/statefulset
	Template corev1.PodTemplateSpec `json:"template,omitempty"`
	// replica count, 10 at most
	// +kubebuilder:validation:Maximum=10
	Replicas *int32 `json:"replicas,omitempty"`
	// service port, 65535 at most
	// +kubebuilder:validation:Maximum=65535
	Port *int32 `json:"port,omitempty"`
	// container target port, 65535 at most
	// +kubebuilder:validation:Maximum=65535
	Targetport *int32 `json:"targetport,omitempty"`
}

// ServiceManagerStatus defines the observed state of ServiceManager
type ServiceManagerStatus struct {
	Replicas         int32                   `json:"replicas,omitempty"`
	LastUpdateTime   metav1.Time             `json:"last_update_time,omitempty"`
	DeploymentStatus appsv1.DeploymentStatus `json:"deployment_status,omitempty"`
	ServiceStatus    corev1.ServiceStatus    `json:"service_status,omitempty"`
}

// Spec and Status here are both plain member fields of ServiceManager; Status is
// not a subresource of ServiceManager the way Pod.Status is a subresource of Pod.
// Without the marker below, calling Status().Update() in the controller code
// triggers a panic with the error:
//   the server could not find the requested resource
// To follow the k8s convention for the status subresource, add
// +kubebuilder:subresource:status, which means:
//   - users can only set the spec of a CRD instance;
//   - the status is changed only by the controller.
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:selectorpath=.spec.selector,specpath=.spec.replicas,statuspath=.status.replicas

// ServiceManager is the Schema for the servicemanagers API
type ServiceManager struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ServiceManagerSpec   `json:"spec,omitempty"`
	Status ServiceManagerStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// ServiceManagerList contains a list of ServiceManager
type ServiceManagerList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ServiceManager `json:"items"`
}

Be sure to include the // +kubebuilder:subresource:status marker; without it, updating the resource status fails with a resource-not-found error.

3、Edit the servicemanager_controller.go file

Define an interface for the owned (child) resources, covering building the object, checking whether it already exists, updating status, and creating/updating it:

type OwnResource interface {
	// build the concrete object for the owned resource
	MakeOwnResource(instance *servicemanagerv1.ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error)
	// check whether the owned resource already exists
	OwnResourceExist(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger) (bool, interface{}, error)
	// read the owned resource's status and update the custom resource's status
	UpdateOwnerResources(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger) error
	// create or update the owned resource
	ApplyOwnResource(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger, scheme *runtime.Scheme) error
}
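To see why collecting the children behind one interface pays off, here is a self-contained toy version of the pattern (hypothetical: the interface is cut down to a single Apply method and the k8s types are replaced by stub structs):

```go
package main

import "fmt"

// toy version of the OwnResource interface: one method instead of four
type ownResource interface {
	Apply() error
}

type ownService struct{ port int32 }
type ownDeployment struct{ replicas int32 }

func (s *ownService) Apply() error {
	fmt.Printf("apply Service on port %d\n", s.port)
	return nil
}

func (d *ownDeployment) Apply() error {
	fmt.Printf("apply Deployment with %d replicas\n", d.replicas)
	return nil
}

func main() {
	// the controller can treat every child uniformly,
	// the same way the reconcile loop does
	children := []ownResource{
		&ownDeployment{replicas: 2},
		&ownService{port: 30027},
	}
	for _, c := range children {
		if err := c.Apply(); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
```

This is the design choice the article makes: adding a new child resource kind later (e.g. a StatefulSet) only means adding one more struct that satisfies the interface.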

Each owned-resource struct implements these four methods. Taking the Service as an example:

type OwnService struct {
	Port *int32
}

// build the concrete Service object
func (ownService *OwnService) MakeOwnResource(instance *ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error) {
	var label = map[string]string{"app": instance.Name}
	objectMeta := metav1.ObjectMeta{
		Name:      instance.Name,
		Namespace: instance.Namespace,
	}
	servicePort := []corev1.ServicePort{
		{
			TargetPort: intstr.FromInt(int(*instance.Spec.Targetport)),
			NodePort:   *instance.Spec.Port,
			Port:       *instance.Spec.Port,
		},
	}
	serviceSpec := corev1.ServiceSpec{
		Selector: label,
		Type:     corev1.ServiceTypeNodePort,
		Ports:    servicePort,
	}
	service := &corev1.Service{
		ObjectMeta: objectMeta,
		Spec:       serviceSpec,
	}
	// make the custom resource the owner, so the Service is garbage-collected with it
	if err := controllerutil.SetControllerReference(instance, service, scheme); err != nil {
		msg := fmt.Sprintf("set controllerReference for service %s/%s failed", instance.Namespace, instance.Name)
		logger.Error(err, msg)
		return nil, err
	}
	return service, nil
}

// check whether the Service already exists in the cluster
func (ownService *OwnService) OwnResourceExist(instance *ServiceManager, client client.Client, logger logr.Logger) (bool, interface{}, error) {
	service := &corev1.Service{}
	if err := client.Get(context.Background(), types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, service); err != nil {
		return false, nil, err
	}
	return true, service, nil
}

// read the Service's status and write it into the custom resource's status
func (ownService *OwnService) UpdateOwnerResources(instance *ServiceManager, client client.Client, logger logr.Logger) error {
	service := &corev1.Service{}
	if err := client.Get(context.Background(), types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, service); err != nil {
		logger.Error(err, "the Service resource does not exist")
		return err
	}
	instance.Status.LastUpdateTime = metav1.Now()
	instance.Status.ServiceStatus = service.Status
	return nil
}

// create or update the Service
func (ownService *OwnService) ApplyOwnResource(instance *ServiceManager, client client.Client, logger logr.Logger, scheme *runtime.Scheme) error {
	// first check whether the resource already exists
	exist, found, err := ownService.OwnResourceExist(instance, client, logger)
	if err != nil {
		logger.Error(err, "the Service resource does not exist yet")
		// not fatal: fall through and create it below
	}
	service, err := ownService.MakeOwnResource(instance, logger, scheme)
	if err != nil {
		logger.Error(err, "building the Service object failed")
		return err
	}
	newService, ok := service.(*corev1.Service)
	if !ok {
		logger.Error(err, "type assertion to *corev1.Service failed")
		return err
	}
	if exist {
		// update
		foundService, ok := found.(*corev1.Service)
		if !ok {
			logger.Error(err, "type assertion to *corev1.Service failed")
			return err
		}
		// Pitfall: the Service may be created without a clusterIP specified; the
		// apiserver then assigns one and fills in spec.clusterIP, so the value
		// has to be copied over here or the comparison below always differs.
		// The same goes for SessionAffinity.
		newService.Spec.ClusterIP = foundService.Spec.ClusterIP
		newService.Spec.SessionAffinity = foundService.Spec.SessionAffinity
		newService.ObjectMeta.ResourceVersion = foundService.ObjectMeta.ResourceVersion
		// only update the Service when the specs actually differ
		if foundService != nil && !reflect.DeepEqual(foundService.Spec, newService.Spec) {
			if err := client.Update(context.Background(), newService); err != nil {
				logger.Error(err, "updating the Service failed")
				return err
			}
		}
	} else {
		// create
		if err := client.Create(context.Background(), newService); err != nil {
			logger.Error(err, "creating the Service failed")
			return err
		}
	}
	return nil
}
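The copy-then-compare step above is the subtle part, so here is a minimal, self-contained sketch of it (the struct is a simplified stand-in, not the real corev1.ServiceSpec):

```go
package main

import (
	"fmt"
	"reflect"
)

// Simplified stand-in for corev1.ServiceSpec; the real type carries
// many more server-populated fields.
type serviceSpec struct {
	ClusterIP       string
	SessionAffinity string
	Ports           []int32
}

// needsUpdate mirrors the logic above: copy the server-populated fields from
// the object found in the cluster into the desired object first, then compare.
func needsUpdate(found, desired *serviceSpec) bool {
	desired.ClusterIP = found.ClusterIP
	desired.SessionAffinity = found.SessionAffinity
	return !reflect.DeepEqual(found, desired)
}

func main() {
	found := &serviceSpec{ClusterIP: "10.96.0.42", SessionAffinity: "None", Ports: []int32{30027}}

	// same desired spec: without copying ClusterIP/SessionAffinity first,
	// DeepEqual would report a difference on every reconcile and trigger
	// a needless Update each time
	desired := &serviceSpec{Ports: []int32{30027}}
	fmt.Println(needsUpdate(found, desired)) // false: nothing to do

	changed := &serviceSpec{Ports: []int32{30028}}
	fmt.Println(needsUpdate(found, changed)) // true: the port changed
}
```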

Next, edit the Reconcile method in the controllers package; this is where the actual reconciliation happens:

// +kubebuilder:rbac:groups=servicemanager.servicemanager.io,resources=servicemanagers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=servicemanager.servicemanager.io,resources=servicemanagers/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=statefulsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
func (r *ServiceManagerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	logger := r.Log.WithValues("servicemanager", req.NamespacedName)

	serviceManager := &servicemanagerv1.ServiceManager{}
	if err := r.Get(ctx, req.NamespacedName, serviceManager); err != nil {
		logger.Error(err, "failed to fetch the ServiceManager")
		// ignore not-found errors: the object may already have been deleted
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// the object exists: collect the owned resources to manage
	ownResources, err := r.getOwnResource(serviceManager)
	if err != nil {
		logger.Error(err, "failed to assemble the owned resources")
	}

	var success = true
	// create or update every owned resource
	for _, ownResource := range ownResources {
		if err := ownResource.ApplyOwnResource(serviceManager, r.Client, logger, r.Scheme); err != nil {
			success = false
		}
	}

	// read back the owned resources' status and update the custom resource's status
	newServiceManager := serviceManager.DeepCopy()
	for _, ownResource := range ownResources {
		if err := ownResource.UpdateOwnerResources(newServiceManager, r.Client, logger); err != nil {
			success = false
		}
	}
	if newServiceManager != nil && !reflect.DeepEqual(serviceManager.Status, newServiceManager.Status) {
		if err := r.Status().Update(ctx, newServiceManager); err != nil {
			// status update failures are only logged here
			r.Log.Error(err, "unable to update ServiceManager status")
		}
	}

	if !success {
		// reconciliation failed: return the error so the request is requeued
		logger.Info("failed to apply the owned resources, putting the request back on the workqueue")
		return ctrl.Result{}, err
	}
	logger.Info("owned resources applied successfully")
	return ctrl.Result{}, nil
}

func (r *ServiceManagerReconciler) getOwnResource(instance *servicemanagerv1.ServiceManager) ([]OwnResource, error) {
	var ownResources []OwnResource
	if instance.Spec.Category == "Deployment" {
		ownDeployment := &servicemanagerv1.OwnDeployment{
			Category: instance.Spec.Category,
		}
		ownResources = append(ownResources, ownDeployment)
	} else {
		// StatefulSet support is left for later:
		/*
			ownStatefulSet := &servicemanagerv1.OwnStatefulSet{
				Spec: appsv1.StatefulSetSpec{
					Replicas:    instance.Spec.Replicas,
					Selector:    instance.Spec.Selector,
					Template:    instance.Spec.Template,
					ServiceName: instance.Name,
				},
			}
			ownResources = append(ownResources, ownStatefulSet)
		*/
	}
	if instance.Spec.Port != nil {
		ownService := &servicemanagerv1.OwnService{
			Port: instance.Spec.Port,
		}
		ownResources = append(ownResources, ownService)
	}
	return ownResources, nil
}

4、Edit the servicemanager_webhook.go file. This file intercepts requests before they reach the k8s apiserver; it can mutate the object (defaulting) and validate it.

/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1

import (
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	logf "sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
)

// log is for logging in this package.
var servicemanagerlog = logf.Log.WithName("servicemanager-resource")

func (r *ServiceManager) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(r).
		Complete()
}

// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!

// +kubebuilder:webhook:path=/mutate-servicemanager-servicemanager-io-v1-servicemanager,mutating=true,failurePolicy=fail,groups=servicemanager.servicemanager.io,resources=servicemanagers,verbs=create;update,versions=v1,name=mservicemanager.kb.io

var _ webhook.Defaulter = &ServiceManager{}

// Default implements webhook.Defaulter so a webhook will be registered for the type.
// This method may mutate the object, e.g. fill in default values for some fields.
func (r *ServiceManager) Default() {
	servicemanagerlog.Info("default", "name", r.Name)

	// TODO(user): fill in your defaulting logic.
}

// TODO(user): change verbs to "verbs=create;update;delete" if you want to enable deletion validation.
// +kubebuilder:webhook:verbs=create;update,path=/validate-servicemanager-servicemanager-io-v1-servicemanager,mutating=false,failurePolicy=fail,groups=servicemanager.servicemanager.io,resources=servicemanagers,versions=v1,name=vservicemanager.kb.io

var _ webhook.Validator = &ServiceManager{}

// ValidateCreate implements webhook.Validator so a webhook will be registered for the type.
// The three methods below are used for validation.
func (r *ServiceManager) ValidateCreate() error {
	servicemanagerlog.Info("validate create", "name", r.Name)

	// TODO(user): fill in your validation logic upon object creation.
	return nil
}

// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *ServiceManager) ValidateUpdate(old runtime.Object) error {
	servicemanagerlog.Info("validate update", "name", r.Name)

	// TODO(user): fill in your validation logic upon object update.
	return nil
}

// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *ServiceManager) ValidateDelete() error {
	servicemanagerlog.Info("validate delete", "name", r.Name)

	// TODO(user): fill in your validation logic upon object deletion.
	return nil
}
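As an illustration of what could go into Default() and ValidateCreate(), here is a self-contained sketch of defaulting and validation on simplified local types (the struct and the default values are hypothetical stand-ins mirroring ServiceManagerSpec above; the real hooks would operate on the CRD struct):

```go
package main

import (
	"errors"
	"fmt"
)

// simplified stand-in for ServiceManagerSpec
type spec struct {
	Category string
	Replicas *int32
	Port     *int32
}

// defaulting logic a Default() hook might apply (values are illustrative)
func applyDefaults(s *spec) {
	if s.Category == "" {
		s.Category = "Deployment"
	}
	if s.Replicas == nil {
		one := int32(1)
		s.Replicas = &one
	}
}

// validation logic a ValidateCreate() hook might apply, matching the
// kubebuilder markers on the spec (Replicas <= 10, Port <= 65535)
func validate(s *spec) error {
	if s.Replicas != nil && *s.Replicas > 10 {
		return errors.New("replicas must not exceed 10")
	}
	if s.Port != nil && (*s.Port < 1 || *s.Port > 65535) {
		return errors.New("port must be in 1-65535")
	}
	return nil
}

func main() {
	s := &spec{}
	applyDefaults(s)
	fmt.Println(s.Category, *s.Replicas) // Deployment 1
	fmt.Println(validate(s))             // <nil>
}
```

Note that the validation here duplicates what the CRD schema markers already enforce; a webhook is the place for cross-field rules the schema cannot express.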

Concepts you may run into here, such as webhooks and finalizers, are worth looking up on your own.
The meaning of each kubebuilder marker is documented clearly on the official site.

The custom CRD is basically complete; how do we run it locally?

Edit config/default/kustomization.yaml and uncomment the webhook- and cert-manager-related sections (the scaffold marks them with comments).

Edit config/crd/kustomization.yaml and, following the descriptions in its comments, uncomment the webhook/cainjection patch entries.

Edit the deploy target in the Makefile, replacing the image registry with your own:

export IMAGE="my.registry.com:5000/unit-controller:tmp"
make deploy IMG=${IMAGE}

This ultimately produces a single all_in_one.yaml file of some six thousand lines. Two changes are needed:
1、Add a new field under CustomResourceDefinition.spec in the yaml: preserveUnknownFields: false
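The top of the CRD manifest would then look roughly like this (a sketch; the metadata.name shown here is inferred from the group/kind used in this article):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: servicemanagers.servicemanager.servicemanager.io
spec:
  preserveUnknownFields: false   # the newly added field
  group: servicemanager.servicemanager.io
  ...
```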

2、MutatingWebhookConfiguration and ValidatingWebhookConfiguration
What needs to change in these two webhook configurations? Look at the generated config, taking MutatingWebhookConfiguration as the example.

(The following is adapted from the referenced blog post, which explains this in detail.)
Two things need to change:

caBundle is currently empty and needs to be filled in.
clientConfig currently points the CA-authorized endpoint at the Service unit-webhook-service, i.e. requests get forwarded to the deployment's pods; since we want to debug locally, it has to point at the local machine instead.
The next sections show how to configure both.

CA certificate signing
This breaks down into several steps:

1.ca.cert
First extract the K8s cluster CA's ca.cert file:

kubectl config view --raw -o json | jq -r '.clusters[0].cluster."certificate-authority-data"' | tr -d '"' > ca.cert

The content of ca.cert can then be copied into webhooks.clientConfig.caBundle of both the MutatingWebhookConfiguration and the ValidatingWebhookConfiguration above. (Delete the original Cg== placeholder.)

2.csr
Create the json config file for the certificate signing request.

Note that hosts contains two kinds of entries:

The in-cluster DNS names of the controller's Service, because the controller will eventually run inside K8s.
The IP of a NIC on the local development machine, used to connect to the cluster for debugging, so this IP must be reachable from the K8s cluster.

cat > unit-csr.json << EOF
{
  "hosts": [
    "unit-webhook-service.default.svc",
    "unit-webhook-service.default.svc.cluster.local",
    "192.168.254.1"
  ],
  "CN": "unit-webhook-service",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF

3.Generate the csr and the pem private key files:

[root@vm254011 unit]# cat unit-csr.json | cfssl genkey - | cfssljson -bare unit
2020/05/23 17:44:39 [INFO] generate received request
2020/05/23 17:44:39 [INFO] received CSR
2020/05/23 17:44:39 [INFO] generating key: rsa-2048
2020/05/23 17:44:39 [INFO] encoded CSR
[root@vm254011 unit]#
[root@vm254011 unit]# ls unit*
unit.csr  unit-csr.json  unit-key.pem

4.Create the CertificateSigningRequest resource

cat > csr.yaml << EOF
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: unit
spec:
  request: $(cat unit.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
# apply
kubectl apply -f csr.yaml

5.Submit this CertificateSigningRequest to the cluster.
Check its status:

[root@vm254011 unit]# kubectl apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/unit created
[root@vm254011 unit]# kubectl describe csr unit
Name:         unit
Labels:       <none>
...
CreationTimestamp:  Sat, 23 May 2020 17:56:14 +0800
Requesting User:    kubernetes-admin
Status:             Pending
Subject:
  Common Name:    unit-webhook-service
  Serial Number:
Subject Alternative Names:
  DNS Names:     unit-webhook-service.default.svc
                 unit-webhook-service.default.svc.cluster.local
  IP Addresses:  192.168.254.1
Events:  <none>

It is still in Pending state; the request has to be approved:

[root@vm254011 unit]# kubectl certificate approve unit
certificatesigningrequest.certificates.k8s.io/unit approved
[root@vm254011 unit]#
[root@vm254011 unit]# kubectl get csr unit
NAME   AGE    REQUESTOR          CONDITION
unit   111s   kubernetes-admin   Approved,Issued
# save the issued crt file
[root@vm254011 unit]# kubectl get csr unit -o jsonpath='{.status.certificate}' | base64 --decode > unit.crt

The certificate has now been signed and issued.

To summarize:

The ca.cert file from step 1 goes into the caBundle field.
The unit-key.pem private key from step 3 and the unit.crt file from step 5 are used by the (unit controller) https server.
Updating the WebhookConfigurations
Using the certificate material generated above, substitute into the WebhookConfigurations in all_in_one.yaml. After the substitution:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp: null
  name: unit-mutating-webhook-configuration
webhooks:
- clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXhNakEzTkRNeE0xb1hEVE13TURVeE1EQTNORE14TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5CCmRvZVRHNTlYMkZsYXRoN1RhRnYrZ2hjbGxsV0NLbkxuT1hQLzZydE0wdE92U0RCQjV2UVJsNUF0L3BWMEJucmQKZGtyOWRnMWRKSHp1T05WamkxTml6QVdUbWtSbDBKczMrdjFMUzBCY2xLeU5XbWRQM0NNUWl2M1BDbjNISG9rcgoveDZncnFaa3RxeUo2ck5JMXFocmkzbjNLSWFQWFBtYUJIeW1zWCt1UjQyMk1kaGNhU3dBUDQwUktzcUtWcS81CkRodzdHdVZzdFZHNG5GZUZ2dlFuYU1jVm13WUpyellFQWxNRitlSyswM3IyWEFLQUZxQnBEWXBaZlg1Wi9tUEsKVXlxNlIwcEJUaG9adXlwSUhQekwwMkJGazlDbmU3eTBXd1d6L1VleDJSN2toOVJhendNeVVTNlJKYU4wT2hRaQpsTTZyM2lZcnIzVWIxSW1ieE5NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFENHVNaVZpL28zSkVhVi9UZzVKRWhQK2tQZm8KVzBLeUtaT3FNVlZzRVZsM1l2aFdYdGxOaCtwT0ZHSTlPQVFZdE5NKzZDeEJLVm9Xd1NzSUpyYkpZeVR2bGFlYgpHZnJGZWRkL2NkM0N5M2N1UDQ0ZjRPQ3VabTZWckJUVy8wUms3LzVKMHlLTmlSSDVqelRJL0szZGtKWkNERktOCjRGdWZxZ3Y0QTNxdVYwQXJaNFNOV2poVEx2SlM1VVdaOUpxUndyU3NqNlpvenRJRVhiU1d2aWhyS2FGQmtoWWwKRG5KM2N4cFljYXJ0aVZqS1g3SUNQQTJxdmw1azF4ZEMwVldTQWlLdTVFR24zZkFmdkQwN2poeVBub3lkMjVmWApQeDlkaGlzaDgwaFl4Nm9pbHpHdUppMGZDNjgxZ0VRRTQzUGhNRHRCZHNKMTBEejRQYTdrL2QvY3hETT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    url: https://192.168.254.1:9443/mutate-custom-my-crd-com-v1-unit
#    service:
#      name: unit-webhook-service
#      namespace: default
#      path: /mutate-custom-my-crd-com-v1-unit
  failurePolicy: Fail
  name: munit.kb.io
  rules:
  - apiGroups:
    - custom.my.crd.com
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - units
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  creationTimestamp: null
  name: unit-validating-webhook-configuration
webhooks:
- clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXhNakEzTkRNeE0xb1hEVE13TURVeE1EQTNORE14TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5CCmRvZVRHNTlYMkZsYXRoN1RhRnYrZ2hjbGxsV0NLbkxuT1hQLzZydE0wdE92U0RCQjV2UVJsNUF0L3BWMEJucmQKZGtyOWRnMWRKSHp1T05WamkxTml6QVdUbWtSbDBKczMrdjFMUzBCY2xLeU5XbWRQM0NNUWl2M1BDbjNISG9rcgoveDZncnFaa3RxeUo2ck5JMXFocmkzbjNLSWFQWFBtYUJIeW1zWCt1UjQyMk1kaGNhU3dBUDQwUktzcUtWcS81CkRodzdHdVZzdFZHNG5GZUZ2dlFuYU1jVm13WUpyellFQWxNRitlSyswM3IyWEFLQUZxQnBEWXBaZlg1Wi9tUEsKVXlxNlIwcEJUaG9adXlwSUhQekwwMkJGazlDbmU3eTBXd1d6L1VleDJSN2toOVJhendNeVVTNlJKYU4wT2hRaQpsTTZyM2lZcnIzVWIxSW1ieE5NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFENHVNaVZpL28zSkVhVi9UZzVKRWhQK2tQZm8KVzBLeUtaT3FNVlZzRVZsM1l2aFdYdGxOaCtwT0ZHSTlPQVFZdE5NKzZDeEJLVm9Xd1NzSUpyYkpZeVR2bGFlYgpHZnJGZWRkL2NkM0N5M2N1UDQ0ZjRPQ3VabTZWckJUVy8wUms3LzVKMHlLTmlSSDVqelRJL0szZGtKWkNERktOCjRGdWZxZ3Y0QTNxdVYwQXJaNFNOV2poVEx2SlM1VVdaOUpxUndyU3NqNlpvenRJRVhiU1d2aWhyS2FGQmtoWWwKRG5KM2N4cFljYXJ0aVZqS1g3SUNQQTJxdmw1azF4ZEMwVldTQWlLdTVFR24zZkFmdkQwN2poeVBub3lkMjVmWApQeDlkaGlzaDgwaFl4Nm9pbHpHdUppMGZDNjgxZ0VRRTQzUGhNRHRCZHNKMTBEejRQYTdrL2QvY3hETT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    url: https://192.168.254.1:9443/validate-custom-my-crd-com-v1-unit
#    service:
#      name: unit-webhook-service
#      namespace: default
#      path: /validate-custom-my-crd-com-v1-unit
  failurePolicy: Fail
  name: vunit.kb.io
  rules:
  - apiGroups:
    - custom.my.crd.com
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - units

Note: the IP in url must be the local development machine's IP, and this IP must be able to communicate with the K8s cluster; the URI is the path from the commented-out service config.

With both WebhookConfigurations updated, the next step is to apply all_in_one.yaml. Since the controller will run and be debugged locally for now, remember to comment out the Deployment resource in all_in_one.yaml at this stage.

[root@vm254011 unit]# kubectl apply -f all_in_one.local.yaml  --validate=false
namespace/unit-system created
customresourcedefinition.apiextensions.k8s.io/units.custom.my.crd.com created
mutatingwebhookconfiguration.admissionregistration.k8s.io/unit-mutating-webhook-configuration created
role.rbac.authorization.k8s.io/unit-leader-election-role created
clusterrole.rbac.authorization.k8s.io/unit-manager-role created
clusterrole.rbac.authorization.k8s.io/unit-proxy-role created
clusterrole.rbac.authorization.k8s.io/unit-metrics-reader created
rolebinding.rbac.authorization.k8s.io/unit-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/unit-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/unit-proxy-rolebinding created
service/unit-controller-manager-metrics-service created
service/unit-webhook-service created
validatingwebhookconfiguration.admissionregistration.k8s.io/unit-validating-webhook-configuration created

The CRD, webhook, and RBAC resources on the K8s side are all in place; the next step is to start the local controller and debug.

Create a new ServiceManager custom resource:

apiVersion: servicemanager.servicemanager.io/v1
kind: ServiceManager
metadata:
  name: servicemanager-sample
spec:
  # Add fields here
  category: Deployment
  #selector:
  #  app: servicemanager-sample
  replicas: 2
  port: 30027 # nodePort and service port
  targetport: 80 # container port
  template:
    metadata:
      name: servicemanager-sample
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: servicemanager-sample
        resources:
          limits:
            cpu: 110m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 128Mi

The service is now reachable normally via nodeIP + port.
