Log: /mnt/jenkins/workspace/cloud-pxc-operator_PR-2473/e2e-tests/logs/security-context-8-0.log
Warning: version difference between client (1.36) and server (1.33) exceeds the supported minor version skew of +/-1
Warning: version difference between client (1.36) and server (1.33) exceeds the supported minor version skew of +/-1
No resources found
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: resource(s) were provided, but no name was specified
No resources found
No resources found
No resources found
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces pxc-operator
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
namespace "pxc-operator" deleted
waiting for namespace/pxc-operator to be deleted
Error from server (NotFound): namespaces "pxc-operator" not found
-----------------------------------------------------------------------------------
create namespace pxc-operator
-----------------------------------------------------------------------------------
namespace/pxc-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2473-6d392bea-4-cluster6" modified.
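NOTE: the cleanup phase above force-deletes leftover PXC custom resources by clearing their finalizers; the repeated "no name was specified" errors are that patch running against namespaces that hold no pxc objects. A minimal sketch of the pattern (cluster name and namespace are placeholders, not values from this run):

    kubectl patch pxc <cluster> -n <namespace> --type=merge -p '{"metadata":{"finalizers":[]}}'
    kubectl delete pxc <cluster> -n <namespace> --wait=false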
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
pod/percona-xtradb-cluster-operator-55d95dc9d8-fl2qb condition met
E0517 01:07:25.610608 21825 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/pxc-operator/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpercona-xtradb-cluster-operator-55d95dc9d8-fl2qb&resourceVersion=1778980045250733000&timeoutSeconds=587&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
pod/percona-xtradb-cluster-operator-55d95dc9d8-fl2qb condition met
E0517 01:07:31.516094 22716 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/pxc-operator/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpercona-xtradb-cluster-operator-55d95dc9d8-fl2qb&resourceVersion=1778980049647102000&timeoutSeconds=560&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
waiting for pod/percona-xtradb-cluster-operator-55d95dc9d8-fl2qb to become Ready.Ok
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces security-context-8393
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "security-context-8393" not found
waiting for namespace/security-context-8393 to be deleted
error: resource(s) were provided, but no name was specified
Error from server (NotFound): namespaces "security-context-8393" not found
-----------------------------------------------------------------------------------
create namespace security-context-8393
-----------------------------------------------------------------------------------
namespace/security-context-8393 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2473-6d392bea-4-cluster6" modified.
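NOTE: the "condition met" / "waiting for pod/... to become Ready.Ok" pairs are consistent with a readiness wait along these lines (the exact flags used by the test harness are an assumption):

    kubectl wait --for=condition=Ready pod/percona-xtradb-cluster-operator-55d95dc9d8-fl2qb -n pxc-operator --timeout=300s

The E0517 "Failed to watch ... context canceled" entries are client-go reflector messages emitted when the watch is torn down as the wait returns; they are noise here, not failures.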
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/do-spaces-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
deploy cert manager
-----------------------------------------------------------------------------------
namespace/cert-manager created
namespace/cert-manager labeled
namespace/cert-manager configured
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-tokenrequest created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
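NOTE: cert-manager is installed here by applying its static release manifest; a sketch of the usual install command (the pinned version is an assumption, substitute whichever release the test suite uses):

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.0/cert-manager.yaml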
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-tokenrequest created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager-cainjector created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Warning: resource namespaces/cert-manager is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/percona-xtradb-cluster-operator-workload created
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/sec-context created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "sec-context-proxysql-0" not found
waiting for pod/sec-context-proxysql-0 to become Ready...........Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/sec-context-pxc-0 condition met
waiting for pod/sec-context-pxc-0 to become Ready.Ok
pod/sec-context-pxc-1 condition met
waiting for pod/sec-context-pxc-1 to become Ready.Ok
pod/sec-context-pxc-2 condition met
waiting for pod/sec-context-pxc-2 to become Ready.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-67fc4995bb-wc7zb condition met
E0517 01:13:28.601614 23270 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/security-context-8393/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpxc-client-67fc4995bb-wc7zb&resourceVersion=1778980407844607000&timeoutSeconds=594&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
waiting for pod/pxc-client-67fc4995bb-wc7zb to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-67fc4995bb-wc7zb condition met
E0517 01:13:32.553033 23529 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/security-context-8393/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpxc-client-67fc4995bb-wc7zb&resourceVersion=1778980411796859000&timeoutSeconds=575&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
waiting for pod/pxc-client-67fc4995bb-wc7zb to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-67fc4995bb-wc7zb condition met
E0517 01:14:09.405594 26705 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/security-context-8393/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpxc-client-67fc4995bb-wc7zb&resourceVersion=1778980447465210000&timeoutSeconds=575&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
waiting for pod/pxc-client-67fc4995bb-wc7zb to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-67fc4995bb-wc7zb condition met
E0517 01:14:15.161147 27439 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/security-context-8393/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpxc-client-67fc4995bb-wc7zb&resourceVersion=1778980454374365000&timeoutSeconds=476&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
waiting for pod/pxc-client-67fc4995bb-wc7zb to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-67fc4995bb-wc7zb condition met
E0517 01:14:21.463868 28056 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/security-context-8393/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpxc-client-67fc4995bb-wc7zb&resourceVersion=1778980460697230000&timeoutSeconds=305&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
waiting for pod/pxc-client-67fc4995bb-wc7zb to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
-----------------------------------------------------------------------------------
check if service and statefulset created with expected config
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/sec-context-pxc-
-----------------------------------------------------------------------------------
[2026-05-17T01:14:27+0000] compare_kubectl: statefulset/sec-context-pxc OK
-----------------------------------------------------------------------------------
compare statefulset/sec-context-proxysql-
-----------------------------------------------------------------------------------
[2026-05-17T01:14:28+0000] compare_kubectl: statefulset/sec-context-proxysql OK
-----------------------------------------------------------------------------------
change security context in PXC cluster
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/sec-context configured
-----------------------------------------------------------------------------------
check if service and statefulset changed to expected config
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/sec-context-pxc--changes
-----------------------------------------------------------------------------------
[2026-05-17T01:15:04+0000] compare_kubectl: statefulset/sec-context-pxc OK
-----------------------------------------------------------------------------------
compare statefulset/sec-context-proxysql--changes
-----------------------------------------------------------------------------------
[2026-05-17T01:15:06+0000] compare_kubectl: statefulset/sec-context-proxysql OK
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for pxc/sec-context to be ready......................
-----------------------------------------------------------------------------------
run pvc backup
-----------------------------------------------------------------------------------
perconaxtradbclusterbackup.pxc.percona.com/on-demand-backup-pvc created
waiting for pxc-backup/on-demand-backup-pvc to reach Succeeded state....................Succeeded
/mnt/jenkins/workspace/cloud-pxc-operator_PR-2473/e2e-tests/security-context/compare/
-----------------------------------------------------------------------------------
compare job.batch/xb-on-demand-backup-pvc-
-----------------------------------------------------------------------------------
[2026-05-17T01:18:32+0000] compare_kubectl: job.batch/xb-on-demand-backup-pvc OK
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2473-6d392bea-4-cluster6" modified.
-----------------------------------------------------------------------------------
run pvc restore
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/restore-pvc created
Error from server (NotFound): pods "restore-src-restore-pvc-sec-context" not found
waiting for pod/restore-src-restore-pvc-sec-context to become Ready..........................
Defaulted container "ncat" out of: ncat, backup-init (init)
.Ok
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged
  creationTimestamp: "2026-05-17T01:19:36Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: sec-context
    app.kubernetes.io/managed-by: percona-xtradb-cluster-operator
    app.kubernetes.io/name: percona-xtradb-cluster
    app.kubernetes.io/part-of: percona-xtradb-cluster
    percona.com/restore-svc-name: restore-src-restore-pvc-sec-context
  name: restore-src-restore-pvc-sec-context
  namespace: security-context-8393
  ownerReferences:
  - apiVersion: pxc.percona.com/v1
    blockOwnerDeletion: true
    controller: true
    kind: PerconaXtraDBClusterRestore
    name: restore-pvc
    uid: 87374a28-16b8-4850-a714-c5fbc4f32111
  resourceVersion: "1778980789724351016"
  uid: f1393b91-24fb-4e1f-8d5c-d343d3266926
spec:
  containers:
  - command:
    - /opt/percona/backup/recovery-pvc-donor.sh
    image: perconalab/percona-xtradb-cluster-operator:main-pxc8.0-backup
    imagePullPolicy: Always
    name: ncat
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /backup
      name: backup
    - mountPath: /etc/mysql/ssl
      name: ssl
    - mountPath: /etc/mysql/ssl-internal
      name: ssl-internal
    - mountPath: /etc/mysql/vault-keyring-secret
      name: vault-keyring-secret
    - mountPath: /opt/percona
      name: bin
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-99vp4
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - command:
    - /backup-init-entrypoint.sh
    image: perconalab/percona-xtradb-cluster-operator:PR-2473-6d392bea
    imagePullPolicy: Always
    name: backup-init
    resources:
      limits:
        cpu: 50m
        memory: 50M
      requests:
        cpu: 50m
        memory: 50M
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /opt/percona
      name: bin
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-99vp4
      readOnly: true
  nodeName: gke-jen-pxc-2473-6d392be-default-pool-09ad5468-tb5c
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001
    supplementalGroups:
    - 1001
    - 1002
    - 1003
  serviceAccount: percona-xtradb-cluster-operator-workload
  serviceAccountName: percona-xtradb-cluster-operator-workload
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: backup
    persistentVolumeClaim:
      claimName: xb-on-demand-backup-pvc-20260517011746-ade5d3ac
  - name: ssl-internal
    secret:
      defaultMode: 420
      optional: true
      secretName: some-name-ssl-internal
  - name: ssl
    secret:
      defaultMode: 420
      optional: false
      secretName: some-name-ssl
  - name: vault-keyring-secret
    secret:
      defaultMode: 420
      optional: true
      secretName: sec-context-vault
  - emptyDir: {}
    name: bin
  - name: kube-api-access-99vp4
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2026-05-17T01:19:47Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2026-05-17T01:19:48Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2026-05-17T01:19:49Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2026-05-17T01:19:49Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2026-05-17T01:19:36Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://b90c31141e7978caeafd68f6ef3e4bffd3291321a880f4a167177efaa11334ec
    image: docker.io/perconalab/percona-xtradb-cluster-operator:main-pxc8.0-backup
    imageID: docker.io/perconalab/percona-xtradb-cluster-operator@sha256:63fb184385f000c63fd08e4185b973aa0b7ceb2b4a1b48e4ffb2aa9fd49df232
    lastState: {}
    name: ncat
    ready: true
    resources: {}
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2026-05-17T01:19:48Z"
    user:
      linux:
        gid: 1001
        supplementalGroups:
        - 1001
        - 1002
        - 1003
        uid: 1001
    volumeMounts:
    - mountPath: /backup
      name: backup
    - mountPath: /etc/mysql/ssl
      name: ssl
    - mountPath: /etc/mysql/ssl-internal
      name: ssl-internal
    - mountPath: /etc/mysql/vault-keyring-secret
      name: vault-keyring-secret
    - mountPath: /opt/percona
      name: bin
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-99vp4
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 10.213.0.22
  hostIPs:
  - ip: 10.213.0.22
  initContainerStatuses:
  - allocatedResources:
      cpu: 50m
      memory: 50M
    containerID: containerd://e67dc1e358806dd113eb9e239db49cddedfbecb9bcc28b90a493160181d327bc
    image: docker.io/perconalab/percona-xtradb-cluster-operator:PR-2473-6d392bea
    imageID: docker.io/perconalab/percona-xtradb-cluster-operator@sha256:39afd18630224b4e3e8cd2cec47cd7563b5db69ebc7a7efe66fb6341afece99e
    lastState: {}
    name: backup-init
    ready: true
    resources:
      limits:
        cpu: 50m
        memory: 50M
      requests:
        cpu: 50m
        memory: 50M
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: containerd://e67dc1e358806dd113eb9e239db49cddedfbecb9bcc28b90a493160181d327bc
        exitCode: 0
        finishedAt: "2026-05-17T01:19:48Z"
        reason: Completed
        startedAt: "2026-05-17T01:19:47Z"
    user:
      linux:
        gid: 2
        supplementalGroups:
        - 2
        - 1001
        - 1002
        - 1003
        uid: 2
    volumeMounts:
    - mountPath: /opt/percona
      name: bin
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-99vp4
      readOnly: true
      recursiveReadOnly: Disabled
  phase: Running
  podIP: 10.97.234.67
  podIPs:
  - ip: 10.97.234.67
  qosClass: Burstable
  startTime: "2026-05-17T01:19:36Z"
-----------------------------------------------------------------------------------
compare pod/restore-src-restore-pvc-sec-context-
-----------------------------------------------------------------------------------
[2026-05-17T01:20:02+0000] compare_kubectl: pod/restore-src-restore-pvc-sec-context OK
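NOTE: the pod dump above is the check target for this test: spec.securityContext carries fsGroup 1001 and supplementalGroups 1001/1002/1003 from the custom resource, and both the ncat container and the backup-init init container run with privileged: true. The same fields can be pulled out directly, e.g.:

    kubectl get pod restore-src-restore-pvc-sec-context -n security-context-8393 \
      -o jsonpath='{.spec.securityContext}{"\n"}{.spec.containers[*].securityContext}{"\n"}'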
waiting for pxc-restore/restore-pvc to reach Succeeded state
2026-05-17T01:20:04 pxc-restore/restore-pvc state: Restoring
2026-05-17T01:20:07 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:10 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:14 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:17 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:21 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:24 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:27 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:30 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:33 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:36 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:40 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:43 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:47 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:51 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:55 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:20:59 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:02 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:06 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:09 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:12 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:15 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:19 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:22 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:26 pxc-restore/restore-pvc state: Preparing Cluster
2026-05-17T01:21:29 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:32 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:35 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:38 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:41 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:45 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:49 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:52 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:54 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:21:58 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:01 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:03 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:06 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:09 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:12 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:15 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:18 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:22 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:25 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:28 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:31 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:34 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:37 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:41 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:44 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:47 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:50 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:54 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:56 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:22:59 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:02 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:05 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:08 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:10 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:13 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:16 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:19 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:22 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:24 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:27 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:30 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:32 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:34 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:37 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:39 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:41 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:44 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:46 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:49 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:51 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:54 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:56 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:23:59 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:02 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:05 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:07 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:10 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:14 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:16 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:19 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:21 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:24 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:26 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:28 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:31 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:33 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:36 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:38 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:41 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:44 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:46 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:49 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:52 pxc-restore/restore-pvc state: Starting Cluster
2026-05-17T01:24:55 pxc-restore/restore-pvc state: Succeeded
-----------------------------------------------------------------------------------
compare job.batch/restore-job-restore-pvc-sec-context-
-----------------------------------------------------------------------------------
[2026-05-17T01:24:58+0000] compare_kubectl: job.batch/restore-job-restore-pvc-sec-context OK
-----------------------------------------------------------------------------------
run s3 backup
-----------------------------------------------------------------------------------
secret/minio-secret unchanged
"hashicorp" already exists with the same configuration, skipping
"minio" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "percona" chart repository
...Successfully got an update from the "chaos-mesh" chart repository
Update Complete. ⎈Happy Helming!⎈
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
NAME: minio-service
LAST DEPLOYED: Sun May 17 01:25:05 2026
NAMESPACE: security-context-8393
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.security-context-8393.cluster.local

To access MinIO from localhost, run the below commands:
  1. export POD_NAME=$(kubectl get pods --namespace security-context-8393 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
  2. kubectl port-forward $POD_NAME 9000 --namespace security-context-8393
Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
  1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
  2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace security-context-8393 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace security-context-8393 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
  3. mc ls minio-service-local
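NOTE: the Helm NOTES above are the standard MinIO access recipe; condensed into a runnable sketch below. The alias name is arbitrary, and an mc alias replaces step 2 of the NOTES, which as printed would fail in bash because environment variable names cannot contain hyphens:

    export POD_NAME=$(kubectl get pods -n security-context-8393 -l release=minio-service -o jsonpath='{.items[0].metadata.name}')
    kubectl port-forward "$POD_NAME" 9000:9000 -n security-context-8393 &
    MINIO_USER=$(kubectl get secret minio-service -n security-context-8393 -o jsonpath='{.data.rootUser}' | base64 --decode)
    MINIO_PASS=$(kubectl get secret minio-service -n security-context-8393 -o jsonpath='{.data.rootPassword}' | base64 --decode)
    mc alias set minio-local http://localhost:9000 "$MINIO_USER" "$MINIO_PASS"
    mc ls minio-local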
pod/minio-service-5fd5489bdc-jvcc4 condition met
E0517 01:26:10.769172 22160 reflector.go:227] "Failed to watch" err="Get \"https://35.225.104.245/api/v1/namespaces/security-context-8393/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dminio-service-5fd5489bdc-jvcc4&resourceVersion=1778981168913955000&timeoutSeconds=392&watch=true\": context canceled" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" type="*unstructured.Unstructured"
waiting for pod/minio-service-5fd5489bdc-jvcc4 to become Ready.Ok
make_bucket: operator-testing
make_bucket: operator-testing
pod "aws-cli" deleted from security-context-8393 namespace
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt.
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: Internal error occurred: unable to upgrade connection: container aws-cli not found in pod aws-cli_security-context-8393
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for pxc/sec-context to be ready
perconaxtradbclusterbackup.pxc.percona.com/on-demand-backup-s3 created
waiting for pxc-backup/on-demand-backup-s3 to reach Succeeded state..............Succeeded
-----------------------------------------------------------------------------------
compare job.batch/xb-on-demand-backup-s3-
-----------------------------------------------------------------------------------
[2026-05-17T01:27:13+0000] compare_kubectl: job.batch/xb-on-demand-backup-s3 OK
-----------------------------------------------------------------------------------
run s3 restore
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/restore-s3 created
waiting for pxc-restore/restore-s3 to reach Succeeded state
2026-05-17T01:27:20 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:22 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:25 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:27 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:30 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:32 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:35 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:37 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:40 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:43 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:45 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:49 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:52 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:55 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:27:58 pxc-restore/restore-s3 state: Stopping Cluster
2026-05-17T01:28:01 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:03 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:06 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:08 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:11 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:13 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:16 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:19 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:22 pxc-restore/restore-s3 state: Restoring
2026-05-17T01:28:25 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:28 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:30 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:33 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:36 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:39 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:42 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:46 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:49 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:53 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:28:57 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:01 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:05 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:09 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:13 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:17 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:20 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:23 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:26 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:29 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:32 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:35 pxc-restore/restore-s3 state: Preparing Cluster
2026-05-17T01:29:37 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:29:40 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:29:44 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:29:47 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:29:49 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:29:52 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:29:54 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:29:57 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:00 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:02 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:05 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:08 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:11 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:13 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:16 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:19 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:22 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:25 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:29 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:32 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:35 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:38 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:41 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:44 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:47 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:50 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:52 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:55 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:30:58 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:02 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:05 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:08 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:11 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:13 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:15 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:18 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:21 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:23 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:26 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:29 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:32 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:34 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:37 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:40 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:43 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:46 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:48 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:53 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:56 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:31:59 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:03 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:06 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:11 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:14 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:17 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:20 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:23 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:26 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:29 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:33 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:36 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:40 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:42 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:45 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:48 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:51 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:54 pxc-restore/restore-s3 state: Starting Cluster
2026-05-17T01:32:58 pxc-restore/restore-s3 state: Succeeded
-----------------------------------------------------------------------------------
compare job.batch/restore-job-restore-s3-sec-context-
-----------------------------------------------------------------------------------
[2026-05-17T01:33:01+0000] compare_kubectl: job.batch/restore-job-restore-s3-sec-context OK
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
+ kubectl patch pxc -n security-context-8393 sec-context --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/sec-context patched
perconaxtradbcluster.pxc.percona.com "sec-context" deleted from security-context-8393 namespace
perconaxtradbclusterbackup.pxc.percona.com "on-demand-backup-pvc" deleted from security-context-8393 namespace
perconaxtradbclusterbackup.pxc.percona.com "on-demand-backup-s3" deleted from security-context-8393 namespace
perconaxtradbclusterrestore.pxc.percona.com "restore-pvc" deleted from security-context-8393 namespace
perconaxtradbclusterrestore.pxc.percona.com "restore-s3" deleted from security-context-8393 namespace
validatingwebhookconfiguration.admissionregistration.k8s.io "percona-xtradbcluster-webhook" deleted
namespace "cert-manager" deleted
customresourcedefinition.apiextensions.k8s.io "challenges.acme.cert-manager.io" deleted
customresourcedefinition.apiextensions.k8s.io "orders.acme.cert-manager.io" deleted
customresourcedefinition.apiextensions.k8s.io "certificaterequests.cert-manager.io" deleted
customresourcedefinition.apiextensions.k8s.io "certificates.cert-manager.io" deleted
customresourcedefinition.apiextensions.k8s.io "clusterissuers.cert-manager.io" deleted
customresourcedefinition.apiextensions.k8s.io "issuers.cert-manager.io" deleted
serviceaccount "cert-manager-cainjector" deleted from cert-manager namespace
serviceaccount "cert-manager" deleted from cert-manager namespace
serviceaccount "cert-manager-webhook" deleted from cert-manager namespace
clusterrole.rbac.authorization.k8s.io "cert-manager-cainjector" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-issuers" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-clusterissuers" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-certificates" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-orders" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-challenges" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-ingress-shim" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-cluster-view" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-view" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-edit" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-approve:cert-manager-io" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-controller-certificatesigningrequests" deleted
clusterrole.rbac.authorization.k8s.io "cert-manager-webhook:subjectaccessreviews" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-cainjector" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-issuers" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-clusterissuers" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-certificates" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-orders" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-challenges" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-ingress-shim" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-approve:cert-manager-io" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-controller-certificatesigningrequests" deleted
clusterrolebinding.rbac.authorization.k8s.io "cert-manager-webhook:subjectaccessreviews" deleted
role.rbac.authorization.k8s.io "cert-manager-cainjector:leaderelection" deleted from kube-system namespace
role.rbac.authorization.k8s.io "cert-manager:leaderelection" deleted from kube-system namespace
role.rbac.authorization.k8s.io "cert-manager-tokenrequest" deleted from cert-manager namespace
role.rbac.authorization.k8s.io "cert-manager-webhook:dynamic-serving" deleted from cert-manager namespace
rolebinding.rbac.authorization.k8s.io "cert-manager-cainjector:leaderelection" deleted from kube-system namespace
rolebinding.rbac.authorization.k8s.io "cert-manager:leaderelection" deleted from kube-system namespace
rolebinding.rbac.authorization.k8s.io "cert-manager-tokenrequest" deleted from cert-manager namespace
rolebinding.rbac.authorization.k8s.io "cert-manager-webhook:dynamic-serving" deleted from cert-manager namespace
service "cert-manager-cainjector" deleted from cert-manager namespace
service "cert-manager" deleted from cert-manager namespace
service "cert-manager-webhook" deleted from cert-manager namespace
deployment.apps "cert-manager-cainjector" deleted from cert-manager namespace
deployment.apps "cert-manager" deleted from cert-manager namespace
deployment.apps "cert-manager-webhook" deleted from cert-manager namespace
mutatingwebhookconfiguration.admissionregistration.k8s.io "cert-manager-webhook" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "cert-manager-webhook" deleted
-----------------------------------------------------------------------------------
test passed
-----------------------------------------------------------------------------------