Verrazzano Local Setup Hands-On (Based on Kubernetes 1.17.11)


Prerequisites: two CentOS 8.2 virtual machines

- k8s119-master: 192.168.31.12
- k8s119-node1: 192.168.31.13

1. Add a group for the oracle user

```bash
groupadd -g 1001 oinstall
usermod -a -G oinstall oracle
```

2. Create the NFS shared storage

2-1. On the master machine

```bash
systemctl enable --now nfs-server
mkdir /mnt/nfs-shares
echo "/mnt/nfs-shares *(rw,sync,insecure,no_root_squash,no_all_squash,no_subtree_check)" > /etc/exports
exportfs -arv
exportfs -s
mkdir -p /u01/nfs-shares
mount 192.168.31.12:/mnt/nfs-shares /u01/nfs-shares
chown oracle:oracle /u01/nfs-shares
chmod 777 /u01/nfs-shares
echo "192.168.31.12:/mnt/nfs-shares /u01/nfs-shares nfs defaults 0 0" >> /etc/fstab
```

2-2. On the node1 machine

```bash
mkdir -p /u01/nfs-shares
mount 192.168.31.12:/mnt/nfs-shares /u01/nfs-shares
echo "192.168.31.12:/mnt/nfs-shares /u01/nfs-shares nfs defaults 0 0" >> /etc/fstab
```
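Before moving on, it is worth a quick check that the share really is read-write from both machines; a minimal test using the paths created above:

```bash
# On node1: write a marker file through the NFS mount
echo "nfs-ok" > /u01/nfs-shares/nfs-test.txt
# On master: the same file should be visible through the shared directory
cat /u01/nfs-shares/nfs-test.txt   # expected output: nfs-ok
rm /u01/nfs-shares/nfs-test.txt
```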

2-3. On both the master and node1 machines, change the Docker root directory (Verrazzano's Fluentd defaults to /u01/data/docker)

```bash
sudo su -
systemctl stop docker
mkdir -p /u01/data
mv /var/lib/docker /u01/data/
ln -s /u01/data/docker /var/lib/docker
systemctl start docker
```

2-4. On both the master and node1 machines, set the timezone

```bash
timedatectl set-timezone UTC
```

3. Create the Kubernetes 1.17.11 cluster (details abbreviated)

3-1. Create the master node (192.168.31.12)

3-1-1. Create kubeadm-config.yaml:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.31.12
  bindPort: 6443
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
    - oke.server.k8scloud.site
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    serverCertSANs:
      - oke.server.k8scloud.site
#imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.11
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.100.0.0/16
  podSubnet: 10.200.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```

Then initialize the control plane and download the Calico manifest:

```bash
kubeadm init --config kubeadm-config.yaml
curl https://docs.projectcalico.org/v3.10/manifests/calico.yaml -O
```
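kubeadm init prints the standard kubeconfig setup at the end of its output; the kubectl commands below assume it has been run for the oracle user:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```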

Edit calico.yaml (enp0s3 is the NIC name; adjust it to your environment):

```yaml
# Change:
- name: CALICO_IPV4POOL_CIDR
  value: "10.200.0.0/16"
# Append:
- name: IP_AUTODETECTION_METHOD
  value: "interface=enp0s3"
```

Then apply:

```bash
kubectl apply -f calico.yaml
```

3-2. Add the node1 node

```bash
kubeadm join 192.168.31.12:6443 --token 2sovak.660dwvivi5oca237 \
  --discovery-token-ca-cert-hash sha256:c0eb40eee0e02a50450a6fb74c2e46c3e6745a388264df2d6c2260f7af61dd43
```
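The bootstrap token in the join command expires after 24 hours by default; if the join is rejected, a fresh command can be generated on the master:

```bash
kubeadm token create --print-join-command
```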

3-3. Verify the cluster:

```
[oracle@k8s119-master ~]$ kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
k8s119-master   Ready    master   14m     v1.17.11
k8s119-node1    Ready    <none>   7m33s   v1.17.11
[oracle@k8s119-master ~]$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7994b948dd-l56h8   1/1     Running   0          4m58s
kube-system   calico-node-plxr9                          1/1     Running   0          4m58s
kube-system   calico-node-zsfc7                          1/1     Running   0          4m58s
kube-system   coredns-6955765f44-clmln                   1/1     Running   0          14m
kube-system   coredns-6955765f44-vdmbm                   1/1     Running   0          14m
kube-system   etcd-k8s119-master                         1/1     Running   0          14m
kube-system   kube-apiserver-k8s119-master               1/1     Running   0          14m
kube-system   kube-controller-manager-k8s119-master      1/1     Running   0          14m
kube-system   kube-proxy-kzp69                           1/1     Running   0          14m
kube-system   kube-proxy-x8nfp                           1/1     Running   0          7m38s
kube-system   kube-scheduler-k8s119-master               1/1     Running   0          14m
```

4. Create a StorageClass

#### Get [local-path-provisioner](https://github.com/rancher/local-path-provisioner)

```bash
git clone https://github.com/rancher/local-path-provisioner.git
```

#### Edit local-path-storage.yaml

```bash
vi local-path-provisioner/deploy/local-path-storage.yaml
```

Before (affected portion):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
```

After (affected portion):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/u01/nfs-shares"]
        }
      ]
    }
```

#### Create the namespace

```bash
kubectl create ns local-path-storage
```

#### Deploy local-path-storage

```bash
kubectl apply -f local-path-provisioner/deploy/local-path-storage.yaml -n local-path-storage
```

#### Set it as the default StorageClass

```bash
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
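To confirm the patch took effect, list the StorageClasses; local-path should now be marked as the default:

```bash
kubectl get storageclass   # local-path should show "(default)" next to its name
```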

5. Install MetalLB

```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/manifests/namespace.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/manifests/metallb.yaml
```

Create metallb-config.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.31.15-192.168.31.16
```

Then apply it:

```bash
kubectl apply -f metallb-config.yaml
```

Verify:

```
[oracle@k8s119-master workspace]$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-7fb45985f9-m9wjr   1/1     Running   0          14m
speaker-fc2rs                 1/1     Running   0          8m59s
speaker-fmsjm                 1/1     Running   0          8m48s
```
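Optionally, a throwaway LoadBalancer service confirms that MetalLB actually hands out addresses from the pool (lb-test is a hypothetical name used only for this check):

```bash
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get svc lb-test                 # EXTERNAL-IP should come from 192.168.31.15-192.168.31.16
kubectl delete svc,deployment lb-test   # clean up
```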

6. Prepare to install Verrazzano

Install jq:

```bash
sudo dnf install -y jq
```

Install Helm:

```bash
wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
tar zxvf helm-v3.1.2-linux-amd64.tar.gz && rm helm-v3.1.2-linux-amd64.tar.gz
chmod +x linux-amd64/helm
sudo mv linux-amd64/helm /usr/local/bin && rm -rf linux-amd64
```

```
[oracle@k8s119-node1 install]$ helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
```

Download Verrazzano:

```bash
git clone https://github.com/verrazzano/verrazzano.git && cd verrazzano/install
```

Set the environment variables:

```bash
export CLUSTER_TYPE=OKE
export VERRAZZANO_KUBECONFIG=/home/oracle/.kube/config
export KUBECONFIG=$VERRAZZANO_KUBECONFIG
```

Create the Oracle Container Registry secret:

```bash
kubectl create secret docker-registry ocr \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-server=container-registry.oracle.com
```
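A quick sanity check that the secret was created with the expected type (image pulls from container-registry.oracle.com will fail later if the credentials are wrong):

```bash
kubectl get secret ocr -o jsonpath='{.type}'; echo   # expected: kubernetes.io/dockerconfigjson
```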

7. Install Istio

```bash
./1-install-istio.sh
```

Output:

```
[oracle@k8s119-master install]$ ./1-install-istio.sh
Output redirected to /u01/workspace/verrazzano/install/build/logs/1-install-istio.sh.log
Checking Kubernetes version [ OK ]
Waiting for all Kubernetes nodes to exist in cluster [ OK ]
Waiting for all Kubernetes nodes to be ready [ OK ]
Creating istio-system namespace [ OK ]
Generating Istio CA bundle [ OK ]
Installing Istio [ OK ]
Updating CoreDNS configuration [ OK ]
```

Verify that all pods in istio-system started correctly:

```
[oracle@k8s119-master install]$ kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-7476f59757-gzn4n                  1/1     Running     0          2m51s
istio-citadel-86dd848d46-8bkx9            1/1     Running     0          2m50s
istio-egressgateway-75f9fc9f79-b69bv      1/1     Running     0          2m51s
istio-galley-5c4b777754-k27ht             1/1     Running     0          2m51s
istio-grafana-post-install-1.4.6-qkfhc    0/1     Completed   0          2m50s
istio-ingressgateway-687cb57fc4-fhgd6     1/1     Running     0          2m51s
istio-init-crd-10-1.4.6-d52jg             0/1     Completed   0          2m57s
istio-init-crd-11-1.4.6-rj8nf             0/1     Completed   0          2m57s
istio-init-crd-14-1.4.6-qfvk9             0/1     Completed   0          2m57s
istio-pilot-c5d66bd88-rh2n6               2/2     Running     0          2m50s
istio-policy-b556b464f-5jl5k              2/2     Running     2          2m51s
istio-security-post-install-1.4.6-fj789   0/1     Completed   0          2m50s
istio-sidecar-injector-64c78c5b6-qlj7c    1/1     Running     0          2m50s
istio-telemetry-66cb48c6b-8b277           2/2     Running     2          2m51s
istiocoredns-844f8b6454-cfzh9             2/2     Running     0          2m51s
prometheus-85959bb46-f8jsf                1/1     Running     0          2m50s
```

8. Install system-components-magicdns

Run 2a-install-system-components-magicdns.sh:

```bash
./2a-install-system-components-magicdns.sh
```

Output:

```
Output redirected to /u01/workspace/verrazzano/install/build/logs/2a-install-system-components-magicdns.sh.log
Installing NGINX ingress controller [ OK ]
Installing certificate manager [ OK ]
Installing Rancher [ OK ]
```

Verify:

```
[oracle@k8s119-master install]$ kubectl get pods -n cattle-system
NAME                       READY   STATUS    RESTARTS   AGE
rancher-5fb77dbdfd-6d6xh   1/1     Running   0          9m22s
rancher-5fb77dbdfd-w5xvq   1/1     Running   1          9m22s
rancher-5fb77dbdfd-wdz5s   1/1     Running   0          9m22s
[oracle@k8s119-master install]$ kubectl get pods -n cert-manager
NAME                           READY   STATUS    RESTARTS   AGE
cert-manager-bf85f547f-qm56j   1/1     Running   0          9m47s
[oracle@k8s119-master install]$ kubectl get pods -n ingress-nginx
NAME                                                               READY   STATUS    RESTARTS   AGE
ingress-controller-nginx-ingress-controller-5bbc88d9df-cfdf6       1/1     Running   0          10m
ingress-controller-nginx-ingress-default-backend-5bc666bbf-rrrbz   1/1     Running   0          10m
```

Verify that an EXTERNAL-IP was assigned:

```
[oracle@k8s119-master u01]$ kubectl get svc -n ingress-nginx
NAME                                                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-controller-nginx-ingress-controller           LoadBalancer   10.100.182.3    192.168.31.16   80:32431/TCP,443:31610/TCP   30m
ingress-controller-nginx-ingress-controller-metrics   ClusterIP      10.100.26.213   <none>          9913/TCP                     30m
ingress-controller-nginx-ingress-default-backend      ClusterIP      10.100.124.73   <none>          80/TCP                       30m
```

Edit /etc/hosts on both the master and node1 hosts:

```bash
vi /etc/hosts
```

```
# Append:
192.168.31.16 rancher.default.192.168.31.16.xip.io keycloak.default.192.168.31.16.xip.io grafana.vmi.system.default.192.168.31.16.xip.io prometheus.vmi.system.default.192.168.31.16.xip.io kibana.vmi.system.default.192.168.31.16.xip.io elasticsearch.vmi.system.default.192.168.31.16.xip.io
# Append:
192.168.31.15 bobbys-books.v8o.xip.io bobs-books.v8o.xip.io roberts-books.v8o.xip.io
```
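Since these entries stand in for real DNS, confirm they resolve before running the installer:

```bash
getent hosts rancher.default.192.168.31.16.xip.io
ping -c 1 bobbys-books.v8o.xip.io
```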

9. Install Verrazzano

Run 3-install-verrazzano.sh:

```bash
./3-install-verrazzano.sh
```

Output:

```
[oracle@k8s119-master install]$ ./3-install-verrazzano.sh
Output redirected to /u01/workspace/verrazzano/install/build/logs/3-install-verrazzano.sh.log
Getting ingress address [ OK ]
Checking ingress ports [ OK ]
Creating verrazzano-system namespace [ OK ]
Creating admission controller cert [ OK ]
Installing Verrazzano system components [ OK ]
```

Then run:

```bash
kubectl -n verrazzano-system patch deployments verrazzano-cluster-operator --patch \
  '{ "spec": { "template": { "spec": { "hostAliases": [ { "hostnames": [ "rancher.default.192.168.31.16.xip.io" ], "ip": "192.168.31.16" } ] } } } }'
kubectl -n verrazzano-system patch deployments verrazzano-operator --patch \
  '{ "spec": { "template": { "spec": { "hostAliases": [ { "hostnames": [ "rancher.default.192.168.31.16.xip.io" ], "ip": "192.168.31.16" } ] } } } }'
```
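To confirm the patches landed, read the hostAliases back from one of the deployments:

```bash
kubectl -n verrazzano-system get deployment verrazzano-operator \
  -o jsonpath='{.spec.template.spec.hostAliases}'; echo
```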

Verify:

```
[oracle@k8s119-master nfs-shares]$ kubectl get pods -n verrazzano-system
NAME                                               READY   STATUS    RESTARTS   AGE
verrazzano-admission-controller-6df75568ff-hvc9k   1/1     Running   1          22m
verrazzano-cluster-operator-5755d99648-56zsx       1/1     Running   1          22m
verrazzano-monitoring-operator-686669dd6-xfpbn     1/1     Running   2          22m
verrazzano-operator-795865c45b-s8mxj               1/1     Running   1          22m
vmi-system-api-656ff49cfd-lpvss                    1/1     Running   1          22m
vmi-system-es-data-0-78fff8656d-657p7              2/2     Running   0          107s
vmi-system-es-data-1-869959cc89-npbqn              2/2     Running   0          107s
vmi-system-es-ingest-579578bdb4-t4qb6              1/1     Running   1          13m
vmi-system-es-master-0                             1/1     Running   1          13m
vmi-system-es-master-1                             1/1     Running   1          13m
vmi-system-es-master-2                             1/1     Running   1          13m
vmi-system-grafana-7679f58d9c-gr9pz                1/1     Running   1          13m
vmi-system-kibana-5d878bb944-59pz5                 1/1     Running   1          22m
vmi-system-prometheus-0-6459c5456b-twqzx           3/3     Running   3          13m
vmi-system-prometheus-gw-876774b85-jmxzd           1/1     Running   1          13m
```

10. Install Keycloak

Run 4-install-keycloak.sh:

```bash
./4-install-keycloak.sh
```

Output:

```
[oracle@k8s119-master install]$ ./4-install-keycloak.sh
Output redirected to /u01/workspace/verrazzano/install/build/logs/4-install-keycloak.sh.log
Installing MySQL [ OK ]
Installing Keycloak [ OK ]
Setting Rancher Server URL [ OK ]
Installation Complete.

Verrazzano provides various user interfaces.

Grafana - https://grafana.vmi.system.default.192.168.31.16.xip.io
Prometheus - https://prometheus.vmi.system.default.192.168.31.16.xip.io
Kibana - https://kibana.vmi.system.default.192.168.31.16.xip.io
Elasticsearch - https://elasticsearch.vmi.system.default.192.168.31.16.xip.io

You will need the credentials to access the preceding user interfaces. They are all accessed by the same username/password.
User: verrazzano
Password: kubectl get secret --namespace verrazzano-system verrazzano -o jsonpath={.data.password} | base64 --decode; echo

Rancher - https://rancher.default.192.168.31.16.xip.io
User: admin
Password: kubectl get secret --namespace cattle-system rancher-admin-secret -o jsonpath={.data.password} | base64 --decode; echo

Keycloak - https://keycloak.default.192.168.31.16.xip.io
User: keycloakadmin
Password: kubectl get secret --namespace keycloak keycloak-http -o jsonpath={.data.password} | base64 --decode; echo
```
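For convenience, the three password lookups above can be run in one loop; this is just a wrapper around the exact commands printed by the installer:

```bash
for s in "verrazzano-system verrazzano" "cattle-system rancher-admin-secret" "keycloak keycloak-http"; do
  set -- $s   # split into namespace ($1) and secret name ($2)
  echo -n "$2: "
  kubectl get secret --namespace "$1" "$2" -o jsonpath={.data.password} | base64 --decode; echo
done
```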

Then run:

```bash
kubectl -n cattle-system patch deployments cattle-cluster-agent --patch \
  '{ "spec": { "template": { "spec": { "hostAliases": [ { "hostnames": [ "rancher.default.192.168.31.16.xip.io" ], "ip": "192.168.31.16" } ] } } } }'
```

Verify:

```
[oracle@k8s119-master ~]$ kubectl get pods -n keycloak
NAME                     READY   STATUS    RESTARTS   AGE
keycloak-0               1/1     Running   0          5m31s
mysql-547b9b9ff6-dqpkt   1/1     Running   0          8m
```

 

11. Deploy the bobs-books sample

```bash
kubectl create ns bob
kubectl create secret docker-registry ocr \
  --docker-server=container-registry.oracle.com \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_PASSWORD \
  --docker-email=YOUR_EMAIL
kubectl create secret generic bobs-bookstore-weblogic-credentials \
  --from-literal=username=weblogic \
  --from-literal=password=welcome1
kubectl create secret generic bobbys-front-end-weblogic-credentials \
  --from-literal=username=weblogic \
  --from-literal=password=welcome1
kubectl create secret generic mysql-credentials \
  --from-literal=username=books \
  --from-literal=password=WebLogic1234 -n bob
kubectl apply -f mysql.yaml
kubectl apply -f bobs-books-model.yaml
```

Edit bobs-books-binding.yaml:

```bash
vi bobs-books-binding.yaml
```

Before:

```yaml
ingressBindings:
  - name: "bobbys-ingress"
    dnsName: "*"
  - name: "bobs-ingress"
    dnsName: "*"
  - name: "roberts-ingress"
    dnsName: "*"
```

After:

```yaml
ingressBindings:
  - name: "bobbys-ingress"
    dnsName: "bobbys-books.v8o.xip.io"
  - name: "bobs-ingress"
    dnsName: "bobs-books.v8o.xip.io"
  - name: "roberts-ingress"
    dnsName: "roberts-books.v8o.xip.io"
```

Then apply:

```bash
kubectl apply -f bobs-books-binding.yaml
```
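The WebLogic domains in this sample take several minutes to start; watch the bob namespace until everything reports Running before trying the URLs:

```bash
kubectl get pods -n bob -w   # Ctrl-C once all pods are Running
```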

For access to Rancher, Keycloak, Grafana, and the bobs-books application itself, see the companion article on setting up Verrazzano on OKE.

99. Troubleshooting

Error 1: cattle-cluster-agent-xxx is not running

```bash
kubectl -n cattle-system patch deployments cattle-cluster-agent --patch \
  '{ "spec": { "template": { "spec": { "hostAliases": [ { "hostnames": [ "rancher.default.192.168.31.16.xip.io" ], "ip": "192.168.31.16" } ] } } } }'
```

Error 2: verrazzano-cluster-operator and verrazzano-operator fail with `dial tcp: lookup rancher.default.192.168.31.16.xip.io on 10.100.0.10:53: no such host`

```bash
kubectl -n verrazzano-system patch deployments verrazzano-cluster-operator --patch \
  '{ "spec": { "template": { "spec": { "hostAliases": [ { "hostnames": [ "rancher.default.192.168.31.16.xip.io" ], "ip": "192.168.31.16" } ] } } } }'
kubectl -n verrazzano-system patch deployments verrazzano-operator --patch \
  '{ "spec": { "template": { "spec": { "hostAliases": [ { "hostnames": [ "rancher.default.192.168.31.16.xip.io" ], "ip": "192.168.31.16" } ] } } } }'
```

Error 3: after a reboot, vmi-system-es-data-0-xxx and vmi-system-es-data-1-xxx are not running

Option 1: delete the `_state` folder under the PV path, then delete the pods (a sketch of the cleanup follows):
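A sketch of that cleanup, assuming the PVs live under the /u01/nfs-shares path configured for local-path in step 4 (the per-PV directory names are environment-specific, so locate before deleting):

```bash
# Locate the Elasticsearch _state folders inside the PV directories
sudo find /u01/nfs-shares -type d -name _state
# After reviewing the matches, remove them, e.g.:
# sudo rm -rf /u01/nfs-shares/<pv-directory>/.../_state
# then delete the pods as shown below so they restart cleanly
```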

```bash
kubectl -n verrazzano-system get po | grep vmi-system-es-data
```

```
pod/vmi-system-es-data-0-78fff8656d-fj62x   1/2   Running   2   82m
pod/vmi-system-es-data-1-869959cc89-44ch9   1/2   Running   0   76m
```

```bash
# Then run:
kubectl -n verrazzano-system delete pod/vmi-system-es-data-0-78fff8656d-fj62x pod/vmi-system-es-data-1-869959cc89-44ch9
```

Option 2: delete the PVCs first, then the pods:

```bash
kubectl -n verrazzano-system get po,pvc | grep vmi-system-es-data
```

```
pod/vmi-system-es-data-0-78fff8656d-fj62x   1/2   Running   2   82m
pod/vmi-system-es-data-1-869959cc89-44ch9   1/2   Running   0   76m
persistentvolumeclaim/vmi-system-es-data     Bound   pvc-e29f2a93-fe0c-44d5-848b-b18e9a20bdd6   50Gi   RWO   local-path   4h18m
persistentvolumeclaim/vmi-system-es-data-1   Bound   pvc-a4de3cd8-e671-4826-8947-a38af03834a3   50Gi   RWO   local-path   4h18m
```

```bash
# Then run (PVCs first, then pods):
kubectl -n verrazzano-system delete persistentvolumeclaim/vmi-system-es-data persistentvolumeclaim/vmi-system-es-data-1 pod/vmi-system-es-data-0-78fff8656d-fj62x pod/vmi-system-es-data-1-869959cc89-44ch9
```

Error 4: after a reboot, vmi-bobs-books-binding-es-data-0-xxx and vmi-bobs-books-binding-es-data-1-xxx are not running

Option 1: delete the `_state` folder under the PV path (as in Error 3), then delete the pods:

```bash
kubectl -n verrazzano-system get po,pvc | grep vmi-bobs-books-binding-es-data
```

```
pod/vmi-bobs-books-binding-es-data-0-64ddc6b6f6-jb5zh   1/2   Running   0   90s
pod/vmi-bobs-books-binding-es-data-1-85648c97df-2lp44   1/2   Running   0   90s
```

```bash
# Then run:
kubectl -n verrazzano-system delete pod/vmi-bobs-books-binding-es-data-0-64ddc6b6f6-jb5zh pod/vmi-bobs-books-binding-es-data-1-85648c97df-2lp44
```

Option 2: delete the PVCs first, then the pods:

```bash
kubectl -n verrazzano-system get po,pvc | grep vmi-bobs-books-binding-es-data
```

```
persistentvolumeclaim/vmi-bobs-books-binding-es-data     Bound   pvc-ff12bade-249a-49f4-a218-23cb9d85ea68   50Gi   RWO   local-path   17h
persistentvolumeclaim/vmi-bobs-books-binding-es-data-1   Bound   pvc-03538a25-9822-4f77-970f-42a258c56eb8   50Gi   RWO   local-path   17h
pod/vmi-bobs-books-binding-es-data-0-64ddc6b6f6-jb5zh   1/2   Running   0   90s
pod/vmi-bobs-books-binding-es-data-1-85648c97df-2lp44   1/2   Running   0   90s
```

```bash
# Then run (PVCs first, then pods):
kubectl -n verrazzano-system delete persistentvolumeclaim/vmi-bobs-books-binding-es-data persistentvolumeclaim/vmi-bobs-books-binding-es-data-1 pod/vmi-bobs-books-binding-es-data-0-64ddc6b6f6-jb5zh pod/vmi-bobs-books-binding-es-data-1-85648c97df-2lp44
```

Error 5: Elasticsearch reports `failed to parse [kubernetes.labels.app]`. This typically happens when some pods carry a plain `app` label while others use the nested `app.kubernetes.io/*` labels, so the same field reaches Elasticsearch as both a string and an object; the workaround below rebuilds the Filebeat image with processors that rename the object form before indexing.

Create myfilebeat.yml:

```bash
vi myfilebeat.yml
```

```yaml
filebeat.config:
  inputs:
    # Mounted filebeat-inputs configmap:
    path: ${path.config}/inputs.d/*.yml
    # Reload inputs configs as they change:
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: false
name: ${NODENAME}
filebeat.inputs:
- type: docker
  containers.ids:
  - "*"
processors:
- add_cloud_metadata: ~
- rename:
    when:
      has_fields: ['kubernetes.labels.app.kubernetes.io/name']
    fields:
    - from: 'kubernetes.labels.app'
      to: 'kubernetes.labels.appobject'
    ignore_missing: true
    fail_on_error: false
- rename:
    when:
      has_fields: ['kubernetes.labels.appobject']
    fields:
    - from: 'kubernetes.labels.appobject.kubernetes.io/name'
      to: 'kubernetes.labels.app'
    - from: 'kubernetes.labels.appobject.kubernetes.io/part-of'
      to: 'kubernetes.labels.part-of'
    ignore_missing: true
    fail_on_error: false
- drop_fields:
    when:
      has_fields: ['kubernetes.labels.appobject']
    fields:
    - 'kubernetes.labels.appobject'
setup.template.enabled: false
output.elasticsearch:
  hosts: ${ES_URL}
  username: ${ES_USER}
  password: ${ES_PASSWORD}
  index: ${INDEX_NAME}
```

Create run.sh:

```bash
vi run.sh
```

```bash
#!/bin/bash
# Copyright (C) 2020, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
set -x -e
cat /etc/filebeat/filebeat.yml
exec /usr/share/filebeat/filebeat -e -c /usr/share/filebeat/myfilebeat.yml
```

Create Dockerfile-myfilebeat:

```bash
vi Dockerfile-myfilebeat
```

```dockerfile
FROM container-registry.oracle.com/verrazzano/filebeat:6.8.3-8218206-10
COPY myfilebeat.yml /usr/share/filebeat/myfilebeat.yml
COPY run.sh /usr/share/filebeat/run.sh
USER root
RUN chown root:root /usr/share/filebeat/myfilebeat.yml
RUN chown root:root /usr/share/filebeat/run.sh
RUN chmod go-w /usr/share/filebeat/myfilebeat.yml
RUN chmod +x /usr/share/filebeat/run.sh
```

Build the myfilebeat image:

```bash
docker build -t container-registry.oracle.com/verrazzano/myfilebeat:6.8.3-8218206-10 . -f Dockerfile-myfilebeat
```

Point the filebeat DaemonSet at the new image:

```bash
kubectl edit ds filebeat -n logging
```

```yaml
# Before:
#   image: container-registry.oracle.com/verrazzano/filebeat:6.8.3-8218206-10
# After:
image: container-registry.oracle.com/verrazzano/myfilebeat:6.8.3-8218206-10
```

Delete the filebeat pods so they restart with the new image:

```bash
kubectl get po -n logging | grep filebeat
kubectl delete po <filebeat-pod> -n logging
```
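After the DaemonSet change, confirm the pods restarted with the custom image and stay Running:

```bash
kubectl -n logging get ds filebeat -o jsonpath='{.spec.template.spec.containers[0].image}'; echo
kubectl -n logging get po | grep filebeat
```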

Done!

 

 
