Background:
Prerequisite: Operator-1, a first look at Operators, creating a simple operator starting from a Pod...
Create a Pod
Add an Image field to RedisSpec
Note: to emphasize, I deliberately added a field in api/v1/redis_types.go. What should we do next?
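For context, here is a minimal sketch of what the modified RedisSpec in api/v1/redis_types.go might look like; the Port field is assumed to carry over from the previous article (an int, as implied by the int32 conversion used later), and only the Image line is new here.

type RedisSpec struct {
	// Port is the container port the Redis Pod listens on (assumed from the previous article)
	Port int `json:"port,omitempty"`
	// Image is the Redis container image to run (the newly added field)
	Image string `json:"image,omitempty"`
}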
First, publish the CRD:
[zhangpeng@zhangpeng kube-oprator1]$ ./bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
[zhangpeng@zhangpeng kube-oprator1]$ kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/redis.myapp1.zhangpeng.com configured
Run describe on the CRD: the Image field is now present in the configuration!
[zhangpeng@zhangpeng kube-oprator1]$ kubectl describe crd redis.myapp1.zhangpeng.com
Should the image be published too?
When publishing the image, SetupWebhookWithManager was enabled... it seems the deployment errors out if it is not. Build the image and push it:
[zhangpeng@zhangpeng kube-oprator1]$ cd config/manager && kustomize edit set image controller=ccr.ccs.tencentyun.com/layatools/zpredis:v2
[zhangpeng@zhangpeng manager]$ cd ../../
[zhangpeng@zhangpeng kube-oprator1]$ kustomize build config/default | kubectl apply -f -
Note: of course it would be better to use the native make targets; my environment still has issues, so I break the commands apart.
Another reminder about SetupWebhookWithManager
Once the deployment is done, turn SetupWebhookWithManager in main.go back off, otherwise local debugging with make run will not work.
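For reference, the block being toggled is the webhook registration that kubebuilder scaffolds into main.go. A rough sketch (the exact wording depends on the kubebuilder version; newer scaffolds also guard it with an ENABLE_WEBHOOKS environment check):

	// Comment this block out for local `make run` debugging; re-enable it before
	// building the image for in-cluster deployment.
	if err = (&myapp1v1.Redis{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "Redis")
		os.Exit(1)
	}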
Create a CreateRedis function
helper/help_redis.go
package helper

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	v1 "kube-oprator1/api/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// CreateRedis creates a Pod for the given Redis custom resource.
func CreateRedis(client client.Client, redisConfig *v1.Redis) error {
	newpod := &corev1.Pod{}
	newpod.Name = redisConfig.Name
	newpod.Namespace = redisConfig.Namespace
	newpod.Spec.Containers = []corev1.Container{
		{
			Name:            redisConfig.Name,
			Image:           redisConfig.Spec.Image,
			ImagePullPolicy: corev1.PullIfNotPresent,
			Ports: []corev1.ContainerPort{
				{
					ContainerPort: int32(redisConfig.Spec.Port),
				},
			},
		},
	}
	return client.Create(context.Background(), newpod)
}
Call CreateRedis from Reconcile
controllers/redis_controller.go
func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	redis := &myapp1v1.Redis{}
	if err := r.Get(ctx, req.NamespacedName, redis); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("object", redis)
		err := helper.CreateRedis(r.Client, redis)
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
make run test
Note: actually you can skip make run here, since everything has been deployed to the cluster; just check the operator logs in the cluster instead. test/redis.yaml:
apiVersion: myapp1.zhangpeng.com/v1
kind: Redis
metadata:
  name: zhangpeng1
spec:
  port: 6379
  image: redis:latest
Note: on purpose I created two of them, adding a second instance named zhangpeng2.
[zhangpeng@zhangpeng ~]$ kubectl get Redis
[zhangpeng@zhangpeng ~]$ kubectl get pods
Creation succeeded! Next comes the problem of Pod deletion:
[zhangpeng@zhangpeng ~]$ kubectl delete Redis zhangpeng2
[zhangpeng@zhangpeng ~]$ kubectl get pod
So the zhangpeng2 Redis was deleted, but its Pod still exists. How do we make the Pod get deleted along with it?
Deleting the Pod resource
Reference: https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/ (Finalizers)
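For orientation, the conventional finalizer flow from the referenced post looks roughly like the sketch below; the package name, finalizer name, and function are hypothetical and only illustrate the idea. This article later takes an unusual variant and stores one finalizer entry per Pod name instead.

package sketch // hypothetical package, illustration only

import (
	"context"

	myapp1v1 "kube-oprator1/api/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

const redisFinalizer = "myapp1.zhangpeng.com/finalizer" // hypothetical finalizer name

func reconcileFinalizer(ctx context.Context, c client.Client, redis *myapp1v1.Redis) (ctrl.Result, error) {
	if redis.DeletionTimestamp.IsZero() {
		// Object is alive: ensure the finalizer is set so deletion waits for our cleanup.
		if !controllerutil.ContainsFinalizer(redis, redisFinalizer) {
			controllerutil.AddFinalizer(redis, redisFinalizer)
			return ctrl.Result{}, c.Update(ctx, redis)
		}
		return ctrl.Result{}, nil
	}
	// Object is being deleted: clean up the owned Pods here, then remove the
	// finalizer so the API server can complete the deletion.
	controllerutil.RemoveFinalizer(redis, redisFinalizer)
	return ctrl.Result{}, c.Update(ctx, redis)
}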
Cleaning up resources
Clean up the resources: delete the webhook, the Redis objects, and other related resources.
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get ValidatingWebhookConfiguration
NAME WEBHOOKS AGE
cert-manager-webhook 1 3d23h
kube-oprator1-validating-webhook-configuration 1 3d23h
[zhangpeng@zhangpeng kube-oprator1]$ kubectl delete ValidatingWebhookConfiguration
error: resource(s) were provided, but no name was specified
[zhangpeng@zhangpeng kube-oprator1]$ kubectl delete ValidatingWebhookConfiguration kube-oprator1-validating-webhook-configuration
validatingwebhookconfiguration.admissionregistration.k8s.io "kube-oprator1-validating-webhook-configuration" deleted
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get MutatingWebhookConfiguration
NAME WEBHOOKS AGE
cert-manager-webhook 1 3d23h
kube-oprator1-mutating-webhook-configuration 1 3d23h
[zhangpeng@zhangpeng kube-oprator1]$ kubectl delete MutatingWebhookConfiguration kube-oprator1-mutating-webhook-configuration
mutatingwebhookconfiguration.admissionregistration.k8s.io "kube-oprator1-mutating-webhook-configuration" deleted
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get Redis
NAME AGE
zhangpeng3 5m
[zhangpeng@zhangpeng kube-oprator1]$ kubectl delete Redis zhangpeng3
redis.myapp1.zhangpeng.com "zhangpeng3" deleted
[zhangpeng@zhangpeng kube-oprator1]$ kubectl delete pods zhangpeng3
Error from server (NotFound): pods "zhangpeng3" not found
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get pods
No resources found in default namespace.
Let's do it this way, again for the sake of easy local debugging; otherwise every local change, even just adding a field, means rebuilding and redeploying the image, which is far too cumbersome. Stick with make run local debugging, and only deploy the application to the cluster once debugging is finished; that is the normal workflow!
Requirements: deleting a Redis should delete its Pods; a Redis can have multiple Pod replicas, named in the order zhangpeng-0, zhangpeng-1, zhangpeng-2; plus an update strategy and a scale-down strategy. That is what I have in mind. Of course, if the Pods were managed by a Deployment or StatefulSet this would all be simpler, but here we keep demonstrating with bare Pods.
Add a replica count field to RedisSpec
Modify the RedisSpec struct in api/v1/redis_types.go and add the following:
//+kubebuilder:validation:Minimum:=1
//+kubebuilder:validation:Maximum:=5
Num int `json:"num,omitempty"`
The num field is the Pod replica count; following the pattern of the port field, the replica count is constrained to the range 1-5. Publish the CRD again:
[zhangpeng@zhangpeng kube-oprator1]$ ./bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
[zhangpeng@zhangpeng kube-oprator1]$ kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/redis.myapp1.zhangpeng.com configured
[zhangpeng@zhangpeng kube-oprator1]$ kubectl describe crd redis.myapp1.zhangpeng.com
Confirm that the corresponding field now appears in the CRD:
help_redis.go
Modify helper/help_redis.go as follows:
package helper

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	v1 "kube-oprator1/api/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// GetRedisPodNames assembles the list of Pod names for the Redis resource.
func GetRedisPodNames(redisConfig *v1.Redis) []string {
	podNames := make([]string, redisConfig.Spec.Num)
	for i := 0; i < redisConfig.Spec.Num; i++ {
		podNames[i] = fmt.Sprintf("%s-%d", redisConfig.Name, i)
	}
	fmt.Println("podnames:", podNames)
	return podNames
}

// IsExist checks whether a Pod with the given name is already recorded in the finalizers.
func IsExist(podName string, redis *v1.Redis) bool {
	for _, pod := range redis.Finalizers {
		if podName == pod {
			return true
		}
	}
	return false
}

// CreateRedis creates a Pod and returns its name, or "" if it is already tracked.
func CreateRedis(client client.Client, redisConfig *v1.Redis, podName string) (string, error) {
	if IsExist(podName, redisConfig) {
		return "", nil
	}
	newpod := &corev1.Pod{}
	newpod.Name = podName
	newpod.Namespace = redisConfig.Namespace
	newpod.Spec.Containers = []corev1.Container{
		{
			Name:            podName,
			Image:           redisConfig.Spec.Image,
			ImagePullPolicy: corev1.PullIfNotPresent,
			Ports: []corev1.ContainerPort{
				{
					ContainerPort: int32(redisConfig.Spec.Port),
				},
			},
		},
	}
	return podName, client.Create(context.Background(), newpod)
}
Modify controllers/redis_controller.go:
import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
	myapp1v1 "kube-oprator1/api/v1"
	"kube-oprator1/helper"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)
func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	// TODO(user): your logic here
	redis := &myapp1v1.Redis{}
	if err := r.Client.Get(ctx, req.NamespacedName, redis); err != nil {
		if errors.IsNotFound(err) {
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	// Get the names of the Pod replicas that need to be created.
	podNames := helper.GetRedisPodNames(redis)
	fmt.Println("podNames,", podNames)
	updateFlag := false

	// Delete the Pods. When the Redis custom resource is deleted, the DeletionTimestamp
	// field is set automatically; use it to detect that the resource is being deleted.
	if !redis.DeletionTimestamp.IsZero() {
		return ctrl.Result{}, r.clearRedis(ctx, redis)
	}

	// Create the Pods.
	for _, podName := range podNames {
		finalizerPodName, err := helper.CreateRedis(r.Client, redis, podName)
		if err != nil {
			fmt.Println("create pod failure,", err)
			return ctrl.Result{}, err
		}
		if finalizerPodName == "" {
			continue
		}
		// The Pod is not yet in the finalizers, so add it.
		redis.Finalizers = append(redis.Finalizers, finalizerPodName)
		updateFlag = true
	}

	// Update the Redis resource if its finalizers changed.
	if updateFlag {
		err := r.Client.Update(ctx, redis)
		if err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
// clearRedis implements the Pod deletion logic.
func (r *RedisReconciler) clearRedis(ctx context.Context, redis *myapp1v1.Redis) error {
	// Take the Pod names out of the finalizers and delete each Pod.
	for _, finalizer := range redis.Finalizers {
		err := r.Client.Delete(ctx, &v1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      finalizer,
				Namespace: redis.Namespace,
			},
		})
		if err != nil {
			return err
		}
	}
	// Clear the finalizers; as long as they are non-empty, the resource cannot be deleted.
	redis.Finalizers = []string{}
	return r.Client.Update(ctx, redis)
}
make run & test
Run make run in one console, and run the apply command in a second console:
[zhangpeng@zhangpeng kube-oprator1]$ kubectl apply -f test/redis.yaml
redis.myapp1.zhangpeng.com/zhangpeng3 configured
test/redis.yaml
apiVersion: myapp1.zhangpeng.com/v1
kind: Redis
metadata:
  name: zhangpeng3
spec:
  port: 6379
  num: 2
  image: redis:latest
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get pods
[zhangpeng@zhangpeng kube-oprator1]$ kubectl apply -f test/redis.yaml
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get Redis
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get pods
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get Redis/zhangpeng3 -o yaml
[zhangpeng@zhangpeng kube-oprator1]$ kubectl delete -f test/redis.yaml
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get Redis
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get pods
And the next problem follows immediately:
Create with 2 replicas, change the replica count to 3, then scale back down to 2: what happens? Nothing is reconciled automatically; we now need a way to scale down automatically...
Scaling down Pod resources
The Pods are named in the zhangpeng-0, zhangpeng-1, zhangpeng-2 style; scaling up follows the increasing order 0 1 2 3 4, and scaling down deletes Pods in the order 4 3 2 1 0. Note: at this point the Redis resource from the previous step has already been deleted (kubectl delete -f test/redis.yaml).
Modify Reconcile
Change Reconcile in controllers/redis_controller.go to the following:
func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	// TODO(user): your logic here
	redis := &myapp1v1.Redis{}
	if err := r.Client.Get(ctx, req.NamespacedName, redis); err != nil {
		if errors.IsNotFound(err) {
			return ctrl.Result{}, nil
		}
		fmt.Println("object", redis)
	}

	// The resource is being deleted, or is being scaled down: clear the surplus Pods.
	if !redis.DeletionTimestamp.IsZero() || len(redis.Finalizers) > redis.Spec.Num {
		return ctrl.Result{}, r.clearRedis(ctx, redis)
	}

	podNames := helper.GetRedisPodNames(redis)
	fmt.Println("podNames,", podNames)
	updateFlag := false
	for _, podName := range podNames {
		finalizerPodName, err := helper.CreateRedis(r.Client, redis, podName)
		if err != nil {
			fmt.Println("create pod failure,", err)
			return ctrl.Result{}, err
		}
		if finalizerPodName == "" {
			continue
		}
		redis.Finalizers = append(redis.Finalizers, finalizerPodName)
		updateFlag = true
	}

	// Update the Redis resource if its finalizers changed.
	if updateFlag {
		err := r.Client.Update(ctx, redis)
		if err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
func (r *RedisReconciler) clearRedis(ctx context.Context, redis *myapp1v1.Redis) error {
	/*
		// finalizers > num: can happen, delete the surplus
		// finalizers = num: can happen, delete everything
		// finalizers < num: cannot happen
	*/
	var deletedPodNames []string
	// Take the trailing elements off the finalizers slice.
	position := redis.Spec.Num
	if (len(redis.Finalizers) - redis.Spec.Num) > 0 {
		deletedPodNames = redis.Finalizers[position:]
		redis.Finalizers = redis.Finalizers[:position]
	} else {
		deletedPodNames = redis.Finalizers[:position]
		redis.Finalizers = []string{}
	}
	fmt.Println("deletedPodNames", deletedPodNames)
	fmt.Println("redis.Finalizers", redis.Finalizers)
	for _, finalizer := range deletedPodNames {
		// Delete the Pod.
		err := r.Client.Delete(ctx, &v1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      finalizer,
				Namespace: redis.Namespace,
			},
		})
		if err != nil {
			return err
		}
	}
	redis.Finalizers = []string{}
	return r.Client.Update(ctx, redis)
}
make run and start testing
Run make run in one console... then in a second console run apply -f test/redis.yaml to create three replica Pods:
apiVersion: myapp1.zhangpeng.com/v1
kind: Redis
metadata:
  name: zhangpeng3
spec:
  port: 6379
  num: 3
  image: redis:latest
Change the replica count in the YAML to 2 and apply it: the scale-down works and the zhangpeng3-2 Pod is deleted! But then the problem appears: when I next try to scale the replica count back up to 3, it no longer behaves correctly and no new Pod is created. Check the make run output for errors... Delete these resources and start over, watching each step carefully to see what is missing: when deleting the resources it becomes obvious that the binding between the Pods and the Redis resource is gone. Apply the YAML with a replica count of 2 and look at the data in Finalizers; change the replica count to 2... and the entries disappear? Change the replica count to 3 again... the problem must be in how the finalizers are maintained: the clearRedis above empties redis.Finalizers entirely at the end, even on a partial scale-down, so the surviving Pods are no longer tracked and the next reconcile fails when it tries to create them again.
The final code is as follows:
The following code is adapted from https://github.com/ls-2018/k8s-kustomize
It implements scale-up, scale-down, deletion, and recovery:
helper/help_redis.go
package helper

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	v1 "kube-oprator1/api/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// GetRedisPodNames assembles the list of Pod names for the Redis resource.
func GetRedisPodNames(redisConfig *v1.Redis) []string {
	podNames := make([]string, redisConfig.Spec.Num)
	fmt.Printf("%+v", redisConfig)
	for i := 0; i < redisConfig.Spec.Num; i++ {
		podNames[i] = fmt.Sprintf("%s-%d", redisConfig.Name, i)
	}
	fmt.Println("PodNames: ", podNames)
	return podNames
}

// IsExistPod checks whether the Redis Pod actually exists in the cluster.
func IsExistPod(podName string, redis *v1.Redis, client client.Client) bool {
	err := client.Get(context.Background(), types.NamespacedName{
		Namespace: redis.Namespace,
		Name:      podName,
	},
		&corev1.Pod{},
	)
	if err != nil {
		return false
	}
	return true
}

// IsExistInFinalizers checks whether the Pod name is recorded in the finalizers.
func IsExistInFinalizers(podName string, redis *v1.Redis) bool {
	for _, fPodName := range redis.Finalizers {
		if podName == fPodName {
			return true
		}
	}
	return false
}

// CreateRedis creates a Pod owned by the Redis resource and returns its name.
func CreateRedis(client client.Client, redisConfig *v1.Redis, podName string, schema *runtime.Scheme) (string, error) {
	if IsExistPod(podName, redisConfig, client) {
		return podName, nil
	}
	newPod := &corev1.Pod{}
	newPod.Name = podName
	newPod.Namespace = redisConfig.Namespace
	newPod.Spec.Containers = append(newPod.Spec.Containers, corev1.Container{
		Name:            redisConfig.Name,
		Image:           redisConfig.Spec.Image,
		ImagePullPolicy: corev1.PullIfNotPresent,
		Ports: []corev1.ContainerPort{
			{
				ContainerPort: int32(redisConfig.Spec.Port),
			},
		},
	})
	// Set the Redis resource as the owner of the Pod, so Pod events can be mapped back to it.
	err := controllerutil.SetControllerReference(redisConfig, newPod, schema)
	if err != nil {
		return "", err
	}
	return podName, client.Create(context.Background(), newPod)
}
controllers/redis_controller.go
/*
Copyright 2022 zhang peng.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/record"
	"k8s.io/client-go/util/workqueue"
	myapp1v1 "kube-oprator1/api/v1"
	"kube-oprator1/helper"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// RedisReconciler reconciles a Redis object
type RedisReconciler struct {
	client.Client
	Scheme      *runtime.Scheme
	EventRecord record.EventRecorder
}
//+kubebuilder:rbac:groups=myapp1.zhangpeng.com,resources=redis,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=myapp1.zhangpeng.com,resources=redis/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=myapp1.zhangpeng.com,resources=redis/finalizers,verbs=update
// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the Redis object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.11.2/pkg/reconcile
func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	redis := &myapp1v1.Redis{}
	// After a deletion a request still arrives; it only carries the NamespacedName, nothing else.
	if err := r.Get(ctx, req.NamespacedName, redis); err != nil {
		return ctrl.Result{}, nil
	}

	// The resource is being deleted.
	if !redis.DeletionTimestamp.IsZero() {
		return ctrl.Result{}, r.clearPods(context.Background(), redis)
	}

	// TODO: handle changes in the number of Pods.
	podNames := helper.GetRedisPodNames(redis)
	var err error
	if redis.Spec.Num > len(redis.Finalizers) {
		err = r.UpPods(ctx, podNames, redis)
		if err == nil {
			fmt.Printf("%d", redis.Spec.Num)
			//r.EventRecord.Event(redis, corev1.EventTypeNormal, "UpPods", fmt.Sprintf("%d", redis.Spec.Num))
		} else {
			// r.EventRecord.Event(redis, corev1.EventTypeWarning, "DownPods", fmt.Sprintf("%d", redis.Spec.Num))
		}
	} else if redis.Spec.Num < len(redis.Finalizers) {
		err = r.DownPods(ctx, podNames, redis)
		if err == nil {
			// r.EventRecord.Event(redis, corev1.EventTypeNormal, "DownPods", fmt.Sprintf("%d", redis.Spec.Num))
		} else {
			// r.EventRecord.Event(redis, corev1.EventTypeWarning, "DownPods", fmt.Sprintf("%d", redis.Spec.Num))
		}
		redis.Status.RedisNum = len(redis.Finalizers)
	} else {
		for _, podName := range redis.Finalizers {
			if helper.IsExistPod(podName, redis, r.Client) {
				continue
			} else {
				// The Pod is gone; recreate it.
				err = r.UpPods(ctx, []string{podName}, redis)
				if err != nil {
					return ctrl.Result{}, err
				}
			}
		}
	}
	r.Status().Update(ctx, redis)
	return ctrl.Result{}, err
}
// SetupWithManager sets up the controller with the Manager.
func (r *RedisReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&myapp1v1.Redis{}).
		Watches(&source.Kind{Type: &corev1.Pod{}}, handler.Funcs{
			CreateFunc:  nil,
			UpdateFunc:  nil,
			DeleteFunc:  r.podDeleteHandler,
			GenericFunc: nil,
		}).
		Complete(r)
}
// Pods deleted directly by a user need to be recreated.
func (r *RedisReconciler) podDeleteHandler(event event.DeleteEvent, limitingInterface workqueue.RateLimitingInterface) {
	fmt.Printf(`######################
%s
######################
`, event.Object.GetName())
	for _, ref := range event.Object.GetOwnerReferences() {
		if ref.Kind == r.kind() && ref.APIVersion == r.apiVersion() {
			// Trigger a Reconcile for the owning Redis resource.
			limitingInterface.Add(reconcile.Request{
				NamespacedName: types.NamespacedName{
					Namespace: event.Object.GetNamespace(),
					Name:      ref.Name,
				},
			})
		}
	}
}

func (r *RedisReconciler) kind() string {
	return "Redis"
}

func (r *RedisReconciler) apiVersion() string {
	return "myapp1.zhangpeng.com/v1"
}
func (r *RedisReconciler) clearPods(ctx context.Context, redis *myapp1v1.Redis) error {
	for _, podName := range redis.Finalizers {
		err := r.Client.Delete(ctx, &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      podName,
				Namespace: redis.Namespace,
			},
		})
		// TODO: handle the case where the Pod has already been deleted.
		if err != nil {
			return err
		}
	}
	redis.Finalizers = []string{}
	return r.Client.Update(ctx, redis)
}
func (r *RedisReconciler) DownPods(ctx context.Context, podNames []string, redis *myapp1v1.Redis) error {
	for i := len(redis.Finalizers) - 1; i >= len(podNames); i-- {
		if !helper.IsExistPod(redis.Finalizers[i], redis, r.Client) {
			continue
		}
		err := r.Client.Delete(ctx, &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      redis.Finalizers[i],
				Namespace: redis.Namespace,
			},
		})
		if err != nil {
			return err
		}
	}
	redis.Finalizers = append(redis.Finalizers[:0], redis.Finalizers[:len(podNames)]...)
	return r.Client.Update(ctx, redis)
}

func (r *RedisReconciler) UpPods(ctx context.Context, podNames []string, redis *myapp1v1.Redis) error {
	for _, podName := range podNames {
		podName, err := helper.CreateRedis(r.Client, redis, podName, r.Scheme)
		if err != nil {
			return err
		}
		if controllerutil.ContainsFinalizer(redis, podName) {
			continue
		}
		redis.Finalizers = append(redis.Finalizers, podName)
	}
	err := r.Client.Update(ctx, redis)
	return err
}
Delete the zhangpeng3 Redis
The basic functionality is now in place; let's also test Pod re-creation:
Other items
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get Redis
api/v1/redis_types.go
// RedisStatus defines the observed state of Redis
type RedisStatus struct {
	RedisNum int `json:"num,omitempty"`
}

// The two printcolumn markers below control the columns shown by kubectl get Redis.
//+kubebuilder:printcolumn:JSONPath=".status.num",name=NUM,type=integer
//+kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name=AGE,type=date
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
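For placement, these markers need to sit directly above the Redis type in api/v1/redis_types.go so that controller-gen picks them up when regenerating the CRD. A sketch of the surrounding type as kubebuilder scaffolds it (only the two printcolumn lines are new):

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:JSONPath=".status.num",name=NUM,type=integer
//+kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name=AGE,type=date

// Redis is the Schema for the redis API
type Redis struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   RedisSpec   `json:"spec,omitempty"`
	Status RedisStatus `json:"status,omitempty"`
}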
[zhangpeng@zhangpeng kube-oprator1]$ make install
[zhangpeng@zhangpeng kube-oprator1]$ make run
Take a look at config/crd/bases/myapp1.zhangpeng.com_redis.yaml:
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get Redis
Add Event support
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get Redis
NAME NUM AGE
zhangpeng3 2 82m
[zhangpeng@zhangpeng kube-oprator1]$ kubectl describe Redis zhangpeng3
Looking over my redis_controller.go code, you will notice that I commented out those r.EventRecord.Event lines; they are exactly what emits the Events. They kept panicking with a nil pointer for some reason, so I commented them out. What is the cause? The EventRecord field was never initialized when the RedisReconciler was constructed, so calling Event on the nil recorder panics. See: https://github.com/767829413/learn-to-apply/blob/3621065003918070635880c3348f1d72bf6ead88/docs/Kubernetes/kubernetes-secondary-development.md. Modify main.go:
	if err = (&controllers.RedisReconciler{
		Client:      mgr.GetClient(),
		Scheme:      mgr.GetScheme(),
		EventRecord: mgr.GetEventRecorderFor("myapp1.zhangpeng.com"), // the recorder name can be anything
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "Redis")
		os.Exit(1)
	}
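With the recorder injected, the previously commented-out calls in Reconcile can be re-enabled; for example (a sketch based on the commented line in the scale-up branch above):

	// Emit a normal Event after a successful scale-up; it shows up in kubectl describe Redis.
	r.EventRecord.Event(redis, corev1.EventTypeNormal, "UpPods", fmt.Sprintf("%d", redis.Spec.Num))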
Note: I had just changed the replica count to 2, and then changed it to 4 again for testing.
Final build and release:
webhook
The first step of packaging and releasing: turn the webhook (SetupWebhookWithManager in main.go) back on if it is used, as noted earlier.
IMG build
[zhangpeng@zhangpeng kube-oprator1]$ make install
[zhangpeng@zhangpeng kube-oprator1]$ make docker-build docker-push IMG=ccr.ccs.tencentyun.com/layatools/zpredis:v5
Add the following line to the Dockerfile:
COPY helper/ helper/
Then run the build again and push the image:
[zhangpeng@zhangpeng kube-oprator1]$ make docker-build docker-push IMG=ccr.ccs.tencentyun.com/layatools/zpredis:v5
Release
The normal workflow:
[zhangpeng@zhangpeng kube-oprator1]$ make deploy IMG=ccr.ccs.tencentyun.com/layatools/zpredis:v5
Because of issues with my environment, I still need to break the command apart:
[zhangpeng@zhangpeng kube-oprator1]$ cd config/manager && kustomize edit set image controller=ccr.ccs.tencentyun.com/layatools/zpredis:v5
[zhangpeng@zhangpeng manager]$ cd ../../
[zhangpeng@zhangpeng kube-oprator1]$ kustomize build config/default | kubectl apply -f -
Permissions
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get pods -n kube-oprator1-system
NAME READY STATUS RESTARTS AGE
kube-oprator1-controller-manager-84c7dcf9fc-xxnk8 2/2 Running
[zhangpeng@zhangpeng kube-oprator1]$ kubectl logs -f kube-oprator1-controller-manager-84c7dcf9fc-xxnk8 -n kube-oprator1-system
The permissions are insufficient; create a temporary clusterrolebinding as a workaround:
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get sa kube-oprator1-controller-manager -n kube-oprator1-system -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"kube-oprator1-controller-manager","namespace":"kube-oprator1-system"}}
  creationTimestamp: "2022-06-30T10:53:04Z"
  name: kube-oprator1-controller-manager
  namespace: kube-oprator1-system
  resourceVersion: "1917"
  uid: 46634b00-0c32-4862-8afb-17bb9d2181d0
secrets:
- name: kube-oprator1-controller-manager-token-x2c9f
[zhangpeng@zhangpeng kube-oprator1]$ kubectl create clusterrolebinding kube-oprator1-system --clusterrole cluster-admin --serviceaccount=kube-oprator1-system:kube-oprator1-controller-manager
clusterrolebinding.rbac.authorization.k8s.io/kube-oprator1-system created
Delete the pod and wait for it to start again:
[zhangpeng@zhangpeng kube-oprator1]$ kubectl delete pods kube-oprator1-controller-manager-84c7dcf9fc-xxnk8 -n kube-oprator1-system
pod "kube-oprator1-controller-manager-84c7dcf9fc-xxnk8" deleted
[zhangpeng@zhangpeng kube-oprator1]$ kubectl get pods -n kube-oprator1-system
NAME READY STATUS RESTARTS AGE
kube-oprator1-controller-manager-84c7dcf9fc-fbzsl 1/2 Running 0 4s
[zhangpeng@zhangpeng kube-oprator1]$ kubectl logs -f kube-oprator1-controller-manager-84c7dcf9fc-fbzsl -n kube-oprator1-system
Change the replica count from 3 to 2: basically done. Digging a little deeper, though, when a Pod is deleted and recreated, shouldn't that also be recorded as an Event?
Summary:
Reference repos and articles: https://github.com/ls-2018/k8s-kustomize. For the Event-related changes to main.go see https://github.com/767829413/learn-to-apply/blob/3621065003918070635880c3348f1d72bf6ead88/docs/Kubernetes/kubernetes-secondary-development.md. These articles are also worth reading: https://www.cnblogs.com/cosmos-wong/p/15894689.html and https://podsbook.com/posts/kubernetes/operator/#%E5%9F%BA%E7%A1%80%E6%A6%82%E5%BF%B5. For finalizers see https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/. As for Redis operator articles and repos, apart from https://podsbook.com/posts/kubernetes/operator and the official finalizers documentation, most of this material should trace back to Uncle Shen's course "k8s Fundamentals Fast Track 3: Operator, Prometheus, Log Collection". Next, I plan to write an operator of my own...