Log: /mnt/jenkins/workspace/cloud-psmdb-operator_PR-2297/e2e-tests/logs/demand-backup-physical-minio-native-tls.log
Warning: version difference between client (1.35) and server (1.32) exceeds the supported minor version skew of +/-1
Warning: version difference between client (1.35) and server (1.32) exceeds the supported minor version skew of +/-1
Warning: version difference between client (1.35) and server (1.32) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
grep: warning: stray \ before -
grep: warning: stray \ before -
error: the server doesn't have a resource type "perconaservermongodbbackups"
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbbackups"
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbs"
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces psmdb-operator
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace psmdb-operator
-----------------------------------------------------------------------------------
namespace/psmdb-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-2297-08eff1c6-1-cluster10" modified.
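Note: the kubectl patch calls above clear finalizers on any leftover custom resources so stale instances cannot block CRD deletion; in this run the psmdb.percona.com resource types were already absent, so the patches only print "the server doesn't have a resource type" and are harmless. A minimal sketch of that cleanup pattern (illustrative only, not the test script's exact code):

    for crd in perconaservermongodbbackups perconaservermongodbrestores perconaservermongodbs; do
        # clear finalizers on every remaining instance so the CRD delete can proceed
        kubectl get "${crd}.psmdb.percona.com" --all-namespaces \
            -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
        | while read -r ns name; do
            kubectl patch "${crd}.psmdb.percona.com" "$name" -n "$ns" \
                --type=merge -p '{"metadata":{"finalizers":[]}}'
        done
    done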
-----------------------------------------------------------------------------------
start PSMDB operator: docker.io/perconalab/percona-server-mongodb-operator:PR-2297-08eff1c6
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
waiting for pod/percona-server-mongodb-operator-64f9d5dc56-75bv4 to be ready.OK
Print operator info from log
2026-04-01T01:57:53.091Z INFO setup Manager starting up {"gitCommit": "08eff1c6bedacf4504a02c717fb227e5fbf4d099", "gitBranch": "PR-2297-08eff1c6", "buildTime": "", "goVersion": "go1.25.8", "os": "linux", "arch": "amd64"}
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces demand-backup-physical-minio-native-tls-7311
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace demand-backup-physical-minio-native-tls-7311
-----------------------------------------------------------------------------------
namespace/demand-backup-physical-minio-native-tls-7311 created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-2297-08eff1c6-1-cluster10" modified.
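Note: the operator start step amounts to applying the CRD, RBAC, and operator manifests and pointing the Deployment at the PR-built image. A rough sketch, assuming the standard deploy/ manifests from the operator repository and a sed-based image substitution (the test framework's actual mechanism may differ):

    kubectl apply --server-side -f deploy/crd.yaml
    kubectl apply -n psmdb-operator -f deploy/rbac.yaml
    # substitute the PR image before applying the operator Deployment
    sed 's|image: .*|image: docker.io/perconalab/percona-server-mongodb-operator:PR-2297-08eff1c6|' deploy/operator.yaml \
        | kubectl apply -n psmdb-operator -f -
    # label selector is an assumption; it must match the operator Deployment's pod labels
    kubectl wait --for=condition=Ready pod -n psmdb-operator -l name=percona-server-mongodb-operator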
-----------------------------------------------------------------------------------
Deploy cert-manager for proper certificate generation
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
deploy cert manager
-----------------------------------------------------------------------------------
namespace/cert-manager created
namespace/cert-manager labeled
namespace/cert-manager configured
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-tokenrequest created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-tokenrequest created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager-cainjector created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Warning: resource namespaces/cert-manager is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/cert-manager-559d798845-bnxn8 condition met
pod/cert-manager-cainjector-64958d9c7c-npxw4 condition met
pod/cert-manager-webhook-7fb6f99b56-hrxdd condition met
-----------------------------------------------------------------------------------
Create self-signed issuer
-----------------------------------------------------------------------------------
issuer.cert-manager.io/selfsigned-issuer created
-----------------------------------------------------------------------------------
Generate certificate for first MinIO via cert-manager
-----------------------------------------------------------------------------------
certificate.cert-manager.io/minio-cert created
Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
certificate.cert-manager.io/minio-cert condition met
secret/minio-tls created
secret/minio-ca-bundle created
-----------------------------------------------------------------------------------
Deploy first MinIO with TLS
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
install MinIO: minio-service
-----------------------------------------------------------------------------------
"minio" has been removed from your repositories
"minio" has been added to your repositories
NAME: minio-service
LAST DEPLOYED: Wed Apr 1 02:01:09 2026
NAMESPACE: demand-backup-physical-minio-native-tls-7311
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.demand-backup-physical-minio-native-tls-7311.cluster.local

To access MinIO from localhost, run the below commands:
  1. export POD_NAME=$(kubectl get pods --namespace demand-backup-physical-minio-native-tls-7311 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
  2. kubectl port-forward $POD_NAME 9000 --namespace demand-backup-physical-minio-native-tls-7311

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
  1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
  2.
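Note: the selfsigned-issuer, minio-cert, and minio-tls names above correspond to cert-manager resources along the lines of the sketch below; the dnsNames are assumptions and would need to cover the in-cluster MinIO service name:

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: selfsigned-issuer
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: minio-cert
    spec:
      secretName: minio-tls            # becomes the TLS secret consumed by MinIO
      issuerRef:
        name: selfsigned-issuer
        kind: Issuer
      dnsNames:                        # assumed SANs; must match the MinIO service DNS name
        - minio-service
        - minio-service.demand-backup-physical-minio-native-tls-7311.svc.cluster.local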
     export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace demand-backup-physical-minio-native-tls-7311 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace demand-backup-physical-minio-native-tls-7311 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
  3. mc ls minio-service-local

waiting for pod/minio-service-6f4667bf4c-9vx5h to be ready.OK
service/minio-service created
/usr/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
make_bucket: operator-testing
pod "aws-cli" deleted from demand-backup-physical-minio-native-tls-7311 namespace
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt. If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: Internal error occurred: unable to upgrade connection: container aws-cli not found in pod aws-cli_demand-backup-physical-minio-native-tls-7311
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
secret/gcp-cs-sa-key-secret created
secret/secondary-minio-secret created
secret/some-users created
deployment.apps/psmdb-client created
perconaservermongodb.psmdb.percona.com/some-name created
waiting for pod/some-name-rs0-0 to be ready......OK
waiting for pod/some-name-rs0-1 to be ready.......OK
waiting for pod/some-name-rs0-2 to be ready.....OK
Waiting for cluster readyness......
waiting for cluster readyness.OK
-----------------------------------------------------------------------------------
Verify CA bundle infrastructure
-----------------------------------------------------------------------------------
SSL_CERT_FILE=/etc/s3/certs/ca-bundle.crt
-----------------------------------------------------------------------------------
Generate certificate for secondary MinIO
-----------------------------------------------------------------------------------
certificate.cert-manager.io/secondary-minio-cert created
Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
certificate.cert-manager.io/secondary-minio-cert condition met
secret/secondary-minio-tls created
secret/secondary-minio-ca-bundle created
-----------------------------------------------------------------------------------
install MinIO: secondary-minio-service
-----------------------------------------------------------------------------------
"minio" has been removed from your repositories
"minio" has been added to your repositories
NAME: secondary-minio-service
LAST DEPLOYED: Wed Apr 1 02:03:46 2026
NAMESPACE: demand-backup-physical-minio-native-tls-7311
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
secondary-minio-service.demand-backup-physical-minio-native-tls-7311.cluster.local

To access MinIO from localhost, run the below commands:
  1.
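Note: SSL_CERT_FILE=/etc/s3/certs/ca-bundle.crt above shows that PBM is pointed at a CA bundle for the TLS-enabled MinIO endpoint. The backup storage itself is declared in the PerconaServerMongoDB spec; a minimal sketch of such a storage entry (the endpoint URL and region are assumptions, the bucket and credentials secret names match the log):

    backup:
      enabled: true
      storages:
        minio:
          type: s3
          s3:
            bucket: operator-testing
            region: us-east-1
            credentialsSecret: minio-secret
            endpointUrl: https://minio-service:9000/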
     export POD_NAME=$(kubectl get pods --namespace demand-backup-physical-minio-native-tls-7311 -l "release=secondary-minio-service" -o jsonpath="{.items[0].metadata.name}")
  2. kubectl port-forward $POD_NAME 9000 --namespace demand-backup-physical-minio-native-tls-7311

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
  1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
  2. export MC_HOST_secondary-minio-service-local=http://$(kubectl get secret --namespace demand-backup-physical-minio-native-tls-7311 secondary-minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace demand-backup-physical-minio-native-tls-7311 secondary-minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
  3. mc ls secondary-minio-service-local

waiting for pod/secondary-minio-service-656846748c-jgrdd to be ready.OK
service/secondary-minio-service created
/usr/lib/python2.7/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
  InsecureRequestWarning)
make_bucket: operator-testing
pod "aws-cli" deleted from demand-backup-physical-minio-native-tls-7311 namespace
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt. If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: Internal error occurred: Internal error occurred: error attaching to container: failed to load task: no running task found: task bae0fefe5565f13d7c8870721dd73de0a0ed20976ebb934748273a9f8ba8e2c4 not found: not found
-----------------------------------------------------------------------------------
Add second storage (secondary)
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name patched
waiting for cluster readyness.OK
-----------------------------------------------------------------------------------
Verify merged CA bundle
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
Backup & restore with merged CA
-----------------------------------------------------------------------------------
waiting for PBM operation to finish.OK
error: cannot exec into a container in a completed pod; current phase is Failed
error: cannot exec into a container in a completed pod; current phase is Failed
error: cannot exec into a container in a completed pod; current phase is Failed
error: cannot exec into a container in a completed pod; current phase is Failed
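Note: the "Backup & restore with merged CA" step drives PBM through the operator's backup and restore custom resources. A hedged sketch of the objects involved (the resource names and storage name below are illustrative; only the cluster name some-name appears in the log):

    apiVersion: psmdb.percona.com/v1
    kind: PerconaServerMongoDBBackup
    metadata:
      name: backup-minio
    spec:
      clusterName: some-name
      storageName: minio
      type: physical          # physical backup, the mode exercised by this test
    ---
    apiVersion: psmdb.percona.com/v1
    kind: PerconaServerMongoDBRestore
    metadata:
      name: restore-backup-minio
    spec:
      clusterName: some-name
      backupName: backup-minio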