Log: /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/logs/demand-backup-physical.log
E0507 10:57:53.438954 31106 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:53.666736 31106 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:53.780177 31106 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:53.890511 31106 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
E0507 10:57:57.085204 31616 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:57.320711 31616 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:58.915313 31751 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:59.351021 31751 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:59.462136 31751 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:59.572993 31751 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:57:59.910806 31751 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:00.131702 31751 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:00.244593 31751 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:00.355887 31751 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:00.466949 31751 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbbackups"
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
E0507 10:58:01.572401 31967 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:01.803757 31967 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:01.914991 31967 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:02.026573 31967 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:02.381836 31967 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:02.592978 31967 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:02.707914 31967 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:02.821950 31967 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:02.933217 31967 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbbackups"
E0507 10:58:04.228513 32121 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:04.452308 32121 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:04.563717 32121 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:04.674751 32121 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:06.047127 32298 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:06.170471 32298 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:06.281274 32298 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:06.391929 32298 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:06.737070 32298 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:06.946276 32298 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:07.059661 32298 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:07.176323 32298 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:07.299768 32298 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
E0507 10:58:08.491020 32519 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:08.822843 32519 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:08.935273 32519 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:09.047207 32519 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:09.386281 32519 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:09.615718 32519 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:09.732620 32519 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:09.844880 32519 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:09.956702 32519 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbrestores"
E0507 10:58:11.177820 396 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:11.477552 396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:11.588378 396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:11.699238 396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:13.053142 655 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:13.287778 655 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:13.398466 655 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:13.509996 655 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:13.843723 655 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:14.070083 655 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:14.186451 655 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:14.297256 655 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:14.408032 655 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
E0507 10:58:15.536186 961 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:15.865977 961 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:15.976432 961 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:16.086928 961 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:16.428460 961 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:16.645486 961 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:16.758657 961 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:16.870318 961 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:16.981015 961 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbs"
E0507 10:58:18.260339 1278 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:18.523657 1278 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:18.634472 1278 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:18.745000 1278 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:19.953796 1461 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:20.187980 1461 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:22.203904 1758 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:22.534545 1758 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:22.647196 1758 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
E0507 10:58:24.392788 2112 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:24.613895 2112 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:24.725025 2112 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
E0507 10:58:30.459639 2707 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:30.573129 2707 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:30.684680 2707 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
E0507 10:58:42.206557 4172 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:42.537268 4172 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:42.649499 4172 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:42.777031 4172 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:44.570494 4532 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:44.900417 4532 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:45.013102 4532 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:45.124443 4532 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0507 10:58:46.490675 4801 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:46.728797 4801 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:46.840945 4801 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:46.952662 4801 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:48.665170 5118 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:48.991370 5118 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:49.102135 5118 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:49.212874 5118 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0507 10:58:50.499521 5432 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:50.826171 5432 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:50.938274 5432 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:51.052242 5432 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:53.031770 5700 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:53.357501 5700 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:53.468007 5700 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:53.578632 5700 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0507 10:58:54.905148 6078 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:55.131628 6078 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces psmdb-operator
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace psmdb-operator
-----------------------------------------------------------------------------------
namespace/psmdb-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1543-c8a64a78-9-cluster9" modified.
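The cleanup phase above clears finalizers from any leftover PSMDB custom resources before the old CRDs are deleted, so that a stuck finalizer cannot block removal. A minimal sketch of that pattern, assuming kubectl access to the cluster; the namespace variable is a placeholder, and iterating over names avoids the "no name was specified" errors seen when the patch targets a bare resource type:

ns=sh   # placeholder; the harness uses "-n sh" in its patch calls above
for crd in perconaservermongodbbackups perconaservermongodbrestores perconaservermongodbs; do
    # drop finalizers from every instance, then remove the CRD itself
    for res in $(kubectl get "${crd}.psmdb.percona.com" -n "$ns" -o name 2>/dev/null); do
        kubectl patch "$res" -n "$ns" --type=merge -p '{"metadata":{"finalizers":[]}}'
    done
    kubectl delete crd "${crd}.psmdb.percona.com" --ignore-not-found
done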
-----------------------------------------------------------------------------------
start PSMDB operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
waiting for pod/percona-server-mongodb-operator-fdfdd955d-xhmhv to be ready.OK
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces demand-backup-physical-18487
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace demand-backup-physical-18487
-----------------------------------------------------------------------------------
namespace/demand-backup-physical-18487 created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1543-c8a64a78-9-cluster9" modified.
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
Error: no repo named "minio" found
"minio" has been added to your repositories
NAME: minio-service
LAST DEPLOYED: Tue May 7 10:59:49 2024
NAMESPACE: demand-backup-physical-18487
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.demand-backup-physical-18487.svc.cluster.local

To access MinIO from localhost, run the below commands:

1. export POD_NAME=$(kubectl get pods --namespace demand-backup-physical-18487 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace demand-backup-physical-18487

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:

1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace demand-backup-physical-18487 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace demand-backup-physical-18487 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local

waiting for pod/minio-service-57dd49b-sn2j9 to be ready.OK
service/minio-service created
make_bucket: operator-testing
pod "aws-cli" deleted
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: unable to upgrade connection: container aws-cli not found in pod aws-cli_demand-backup-physical-18487
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
Testing on not sharded cluster
-----------------------------------------------------------------------------------
Creating PSMDB cluster
secret/some-users created
perconaservermongodb.psmdb.percona.com/some-name created
deployment.apps/psmdb-client created
check if all pods started
waiting for pod/some-name-rs0-0 to be ready..................OK
waiting for pod/some-name-rs0-1 to be ready.............OK
waiting for pod/some-name-rs0-2 to be ready................OK
Waiting for cluster readyness.
waiting for cluster readyness
writing test data
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("ff421dc7-ce04-4cd8-ad23-d970b2000444") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
Successfully added user: { "user" : "myApp", "roles" : [ { "db" : "myApp", "role" : "readWrite" } ] }
bye
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("db255815-6fe5-4493-b10a-78458c2bc577") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
running backups
perconaservermongodbbackup.psmdb.percona.com/backup-minio created
perconaservermongodbbackup.psmdb.percona.com/backup-aws-s3 created
perconaservermongodbbackup.psmdb.percona.com/backup-gcp-cs created
perconaservermongodbbackup.psmdb.percona.com/backup-azure-blob created
backup-aws-s3..................................
backup-gcp-cs.....................
backup-azure-blob..................
backup-minio.
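Each backup object created above pairs the cluster with one of the storages configured in the cluster CR (aws-s3, gcp-cs, azure-blob, minio). A minimal sketch of what such an on-demand physical backup manifest looks like; the field set is inferred from the CR dump later in this log and from documented PerconaServerMongoDBBackup fields, not copied from the harness's own conf files:

cat <<EOF | kubectl apply -f -
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup-minio
spec:
  clusterName: some-name   # the PerconaServerMongoDB resource to back up
  storageName: minio       # one of spec.backup.storages in the cluster CR
  type: physical           # this test exercises physical, not logical, backups
EOF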
drop collection
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("44dcc6c4-e9bc-4e81-9252-bc3b88334bff") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
true
bye
check backup and restore -- aws-s3
perconaservermongodbrestore.psmdb.percona.com/restore-backup-aws-s3 created
waiting psmdb-restore/backup-aws-s3 to reach requested state.............................................................................................................................
+ '[' 0 -eq 1 ']'
+ echo

+ compare_kubectl statefulset/some-name-rs0 _restore
+ local resource=statefulset/some-name-rs0
+ local postfix=_restore
+ local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml
+ local new_result=/tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-oc.yml ']'
+ kubectl_bin get -o yaml statefulset/some-name-rs0
++ mktemp
+ local LAST_OUT=/tmp/tmp.zuvHUA2Aao
++ mktemp
+ local LAST_ERR=/tmp/tmp.vPZbUR52vW
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl get -o yaml statefulset/some-name-rs0
+ yq eval '
	del(.metadata.ownerReferences[].apiVersion) |
	del(.metadata.managedFields) |
	del(.. | select(has("creationTimestamp")).creationTimestamp) |
	del(.. | select(has("namespace")).namespace) |
	del(.. | select(has("uid")).uid) |
	del(.metadata.resourceVersion) |
	del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) |
	del(.metadata.selfLink) |
	del(.metadata.annotations."cloud.google.com/neg") |
	del(.. | select(has("image")).image) |
	del(.. | select(has("clusterIP")).clusterIP) |
	del(.. | select(has("clusterIPs")).clusterIPs) |
	del(.. | select(has("dataSource")).dataSource) |
	del(.. | select(has("procMount")).procMount) |
	del(.. | select(has("storageClassName")).storageClassName) |
	del(.. | select(has("finalizers")).finalizers) |
	del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") |
	del(.. | select(has("volumeName")).volumeName) |
	del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") |
	del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") |
	del(.spec.volumeMode) |
	del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") |
	del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") |
	del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") |
	del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") |
	del(.. | select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") |
	del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) |
	del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) |
	del(.. | select(has("nodePort")).nodePort) |
	del(.status) |
	(.. | select(tag == "!!str")) |= sub("demand-backup-physical-18487", "NAME_SPACE") |
	del(.spec.volumeClaimTemplates[].apiVersion) |
	del(.spec.volumeClaimTemplates[].kind) |
	del(.spec.ipFamilies) |
	del(.spec.ipFamilyPolicy) |
	(.. | select(. == "extensions/v1beta1")) = "apps/v1" |
	(.. | select(. == "batch/v1beta1")) = "batch/v1" ' -
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.zuvHUA2Aao
+ cat /tmp/tmp.vPZbUR52vW
+ rm /tmp/tmp.zuvHUA2Aao /tmp/tmp.vPZbUR52vW
+ return 0
+ yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ version_gt 1.22
++ echo '1.26 >= 1.22'
++ bc -l
+ '[' 1 -eq 1 ']'
+ return 0
+ yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml == */cronjob* ]]
+ diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ wait_restore backup-aws-s3 some-name ready 0 1800
+ local backup_name=backup-aws-s3
+ local cluster_name=some-name
+ local target_state=ready
+ local wait_cluster_consistency=0
+ local wait_time=1800
+ set +o xtrace
waiting psmdb-restore/backup-aws-s3 to reach ready state..................................................
+ '[' 0 -eq 1 ']'
+ kubectl_bin get psmdb some-name -o yaml
++ mktemp
+ local LAST_OUT=/tmp/tmp.r2GpXrDb2B
++ mktemp
+ local LAST_ERR=/tmp/tmp.pk5r488aAr
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl get psmdb some-name -o yaml
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.r2GpXrDb2B
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"finalizers":["delete-psmdb-pvc"],"name":"some-name","namespace":"demand-backup-physical-18487"},"spec":{"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","storages":{"aws-s3":{"s3":{"bucket":"operator-testing","credentialsSecret":"aws-s3-secret","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical","region":"us-east-1"},"type":"s3"},"azure-blob":{"azure":{"container":"operator-testing","credentialsSecret":"azure-secret","prefix":"psmdb-demand-backup-physical"},"type":"azure"},"gcp-cs":{"s3":{"bucket":"operator-testing","credentialsSecret":"gcp-cs-secret","endpointUrl":"https://storage.googleapis.com","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical","region":"us-east-1"},"type":"s3"},"minio":{"s3":{"bucket":"operator-testing","credentialsSecret":"minio-secret","endpointUrl":"http://minio-service:9000/","insecureSkipTLSVerify":false,"region":"us-east-1"},"type":"s3"}},"tasks":[{"compressionType":"gzip","enabled":true,"name":"weekly","schedule":"0 0 * * 0","storageName":"aws-s3"}]},"image":"perconalab/percona-server-mongodb-operator:main-mongod7.0","imagePullPolicy":"Always","replsets":[{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n  mode: slowOp\n  slowOpThresholdMs: 100\nsecurity:\n  enableEncryption: true\n  redactClientLogData: false\nsetParameter:\n  ttlMonitorSleepSecs: 60\n  wiredTigerConcurrentReadTransactions: 128\n  wiredTigerConcurrentWriteTransactions: 128\nstorage:\n  engine: wiredTiger\n  wiredTiger:\n    collectionConfig:\n      blockCompressor: snappy\n    engineConfig:\n      directoryForIndexes: false\n      journalCompressor: snappy\n    indexConfig:\n      prefixCompression: true\n","expose":{"enabled":false,"exposeType":"ClusterIP"},"name":"rs0","resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}}],"secrets":{"users":"some-users"},"upgradeOptions":{"apply":"Never"}}}
    percona.com/resync-pbm: "true"
  creationTimestamp: "2024-05-07T11:00:34Z"
  finalizers:
  - delete-psmdb-pvc
  generation: 2
  name: some-name
  namespace: demand-backup-physical-18487
  resourceVersion: "8970"
  uid: 4d7d694a-2c95-4365-8bdd-524a1b907203
spec:
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    storages:
      aws-s3:
        s3:
          bucket: operator-testing
          credentialsSecret: aws-s3-secret
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical
          region: us-east-1
        type: s3
      azure-blob:
        azure:
          container: operator-testing
          credentialsSecret: azure-secret
          prefix: psmdb-demand-backup-physical
        type: azure
      gcp-cs:
        s3:
          bucket: operator-testing
          credentialsSecret: gcp-cs-secret
          endpointUrl: https://storage.googleapis.com
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical
          region: us-east-1
        type: s3
      minio:
        s3:
          bucket: operator-testing
          credentialsSecret: minio-secret
          endpointUrl: http://minio-service:9000/
          insecureSkipTLSVerify: false
          region: us-east-1
        type: s3
    tasks:
    - compressionType: gzip
      enabled: true
      name: weekly
      schedule: 0 0 * * 0
      storageName: aws-s3
  crVersion: 1.16.0
  image: perconalab/percona-server-mongodb-operator:main-mongod7.0
  imagePullPolicy: Always
  replsets:
  - affinity:
      antiAffinityTopologyKey: none
    configuration: |
      operationProfiling:
        mode: slowOp
        slowOpThresholdMs: 100
      security:
        enableEncryption: true
        redactClientLogData: false
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      storage:
        engine: wiredTiger
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
    expose:
      enabled: false
      exposeType: ClusterIP
    name: rs0
    resources:
      limits:
        cpu: 500m
        memory: 1G
      requests:
        cpu: 100m
        memory: 0.1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: some-users
  upgradeOptions:
    apply: Never
status:
  conditions:
  - lastTransitionTime: "2024-05-07T11:00:36Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:02:40Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:02:40Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:02:46Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:05:59Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:06:31Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:06:30Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:07:03Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:07:03Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:07:35Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:07:35Z"
    status: "True"
    type: initializing
  host: some-name-rs0.demand-backup-physical-18487.svc.cluster.local
  mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0
  mongoVersion: 7.0.8-5
  observedGeneration: 2
  ready: 0
  replsets:
    rs0:
      initialized: true
      ready: 0
      size: 3
      status: initializing
  size: 3
  state: initializing
+ cat /tmp/tmp.pk5r488aAr
+ rm /tmp/tmp.r2GpXrDb2B /tmp/tmp.pk5r488aAr
+ return 0
++ yq '.metadata.annotations."percona.com/resync-pbm"'
++ kubectl_bin get psmdb some-name -o yaml
+++ mktemp
++ local LAST_OUT=/tmp/tmp.Ey2C3LNO5z
+++ mktemp
++ local LAST_ERR=/tmp/tmp.xFu6TXBoKb
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o yaml
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.Ey2C3LNO5z
++ cat /tmp/tmp.xFu6TXBoKb
++ rm /tmp/tmp.Ey2C3LNO5z /tmp/tmp.xFu6TXBoKb
++ return 0
+ '[' true == null ']'
+ echo

+ wait_cluster_consistency some-name
+ local cluster_name=some-name
+ local wait_time=32
+ retry=0
+ sleep 7
+ echo -n 'waiting for cluster readyness'
waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.KfgRso9Pnm
+++ mktemp
++ local LAST_ERR=/tmp/tmp.6IedthuQiD
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.KfgRso9Pnm
++ cat /tmp/tmp.6IedthuQiD
++ rm /tmp/tmp.KfgRso9Pnm /tmp/tmp.6IedthuQiD
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 1 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.tCttXPhghP
+++ mktemp
++ local LAST_ERR=/tmp/tmp.EwaTcr8pqt
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.tCttXPhghP
++ cat /tmp/tmp.EwaTcr8pqt
++ rm /tmp/tmp.tCttXPhghP /tmp/tmp.EwaTcr8pqt
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 2 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.1V1LnVRNOD
+++ mktemp
++ local LAST_ERR=/tmp/tmp.rzTwEtZQKF
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.1V1LnVRNOD
++ cat /tmp/tmp.rzTwEtZQKF
++ rm /tmp/tmp.1V1LnVRNOD /tmp/tmp.rzTwEtZQKF
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 3 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.92nsgALPO4
+++ mktemp
++ local LAST_ERR=/tmp/tmp.mn3sdu5SuZ
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.92nsgALPO4
++ cat /tmp/tmp.mn3sdu5SuZ
++ rm /tmp/tmp.92nsgALPO4 /tmp/tmp.mn3sdu5SuZ
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 4 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.UtqQFDeygt
+++ mktemp
++ local LAST_ERR=/tmp/tmp.pPaPhFMh8C
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.UtqQFDeygt
++ cat /tmp/tmp.pPaPhFMh8C
++ rm /tmp/tmp.UtqQFDeygt /tmp/tmp.pPaPhFMh8C
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 5 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.hSFwvpD2Ad
+++ mktemp
++ local LAST_ERR=/tmp/tmp.KoLDli74WV
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.hSFwvpD2Ad
++ cat /tmp/tmp.KoLDli74WV
++ rm /tmp/tmp.hSFwvpD2Ad /tmp/tmp.KoLDli74WV
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 6 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.dzBpTlP4PF
+++ mktemp
++ local LAST_ERR=/tmp/tmp.yKLiRtmuk7
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.dzBpTlP4PF
++ cat /tmp/tmp.yKLiRtmuk7
++ rm /tmp/tmp.dzBpTlP4PF /tmp/tmp.yKLiRtmuk7
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 7 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.IAYoaRR0zU
+++ mktemp
++ local LAST_ERR=/tmp/tmp.VsKnXD6BlC
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.IAYoaRR0zU
++ cat /tmp/tmp.VsKnXD6BlC
++ rm /tmp/tmp.IAYoaRR0zU /tmp/tmp.VsKnXD6BlC
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 8 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.tBEPnEHPyB
+++ mktemp
++ local LAST_ERR=/tmp/tmp.g5MCnMmwd4
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.tBEPnEHPyB
++ cat /tmp/tmp.g5MCnMmwd4
++ rm /tmp/tmp.tBEPnEHPyB /tmp/tmp.g5MCnMmwd4
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 9 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.iD4UUpyp6C
+++ mktemp
++ local LAST_ERR=/tmp/tmp.9Fj0YkTsu8
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.iD4UUpyp6C
++ cat /tmp/tmp.9Fj0YkTsu8
++ rm /tmp/tmp.iD4UUpyp6C /tmp/tmp.9Fj0YkTsu8
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 10 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.ANU4IIT5yl
+++ mktemp
++ local LAST_ERR=/tmp/tmp.cViZ19FDU6
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.ANU4IIT5yl
++ cat /tmp/tmp.cViZ19FDU6
++ rm /tmp/tmp.ANU4IIT5yl /tmp/tmp.cViZ19FDU6
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 11 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.LsdbttPy3c
+++ mktemp
++ local LAST_ERR=/tmp/tmp.BnvdqmX7eg
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.LsdbttPy3c
++ cat /tmp/tmp.BnvdqmX7eg
++ rm /tmp/tmp.LsdbttPy3c /tmp/tmp.BnvdqmX7eg
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 12 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.6mQhaxTONm
+++ mktemp
++ local LAST_ERR=/tmp/tmp.wuePVcxVLu
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.6mQhaxTONm
++ cat /tmp/tmp.wuePVcxVLu
++ rm /tmp/tmp.6mQhaxTONm /tmp/tmp.wuePVcxVLu
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 13 -ge 32 ']'
+ echo -n .
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.qI4kKGll9N
+++ mktemp
++ local LAST_ERR=/tmp/tmp.qq82njwkT0
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.qI4kKGll9N
++ cat /tmp/tmp.qq82njwkT0
++ rm /tmp/tmp.qI4kKGll9N /tmp/tmp.qq82njwkT0
++ return 0
+ [[ ready == \r\e\a\d\y ]]
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487
+ local command=find
+ local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487
+ local postfix=
+ local suffix=
+ local database=myApp
+ local collection=test
+ egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ local 'command=use myApp\n db.test.find()'
+ local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487
+ local driver=mongodb
+ local suffix=.svc.cluster.local
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.HZTNUaJk1S
+++ mktemp
++ local LAST_ERR=/tmp/tmp.5or3cQh5BG
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.HZTNUaJk1S
++ cat /tmp/tmp.5or3cQh5BG
++ rm /tmp/tmp.HZTNUaJk1S /tmp/tmp.5or3cQh5BG
++ return 0
+ local client_container=psmdb-client-5f578b7f94-bghqv
+ local mongo_flag=
+ [[ myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 == *cfg* ]]
+ replica_set=rs0
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
++ mktemp
+ local LAST_OUT=/tmp/tmp.9gmTZBW3ux
++ mktemp
+ local LAST_ERR=/tmp/tmp.7dAVLJCxgM
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.9gmTZBW3ux
+ cat /tmp/tmp.7dAVLJCxgM
+ rm /tmp/tmp.9gmTZBW3ux /tmp/tmp.7dAVLJCxgM
+ return 0
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ local command=find
+ local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ local postfix=
+ local suffix=
+ local database=myApp
+ local collection=test
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ local 'command=use myApp\n db.test.find()'
+ local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
+ local driver=mongodb
+ local suffix=.svc.cluster.local
+ egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.KUAyTrfnhM
+++ mktemp
++ local LAST_ERR=/tmp/tmp.NLBiZLVf9j
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.KUAyTrfnhM
++ cat /tmp/tmp.NLBiZLVf9j
++ rm /tmp/tmp.KUAyTrfnhM /tmp/tmp.NLBiZLVf9j
++ return 0
+ local client_container=psmdb-client-5f578b7f94-bghqv
+ local mongo_flag=
+ [[ myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 == *cfg* ]]
+ replica_set=rs0
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
++ mktemp
+ local LAST_OUT=/tmp/tmp.AEawFPdiCi
++ mktemp
+ local LAST_ERR=/tmp/tmp.iOPNdzyi6o
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.AEawFPdiCi
+ cat /tmp/tmp.iOPNdzyi6o
+ rm /tmp/tmp.AEawFPdiCi /tmp/tmp.iOPNdzyi6o
+ return 0
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ local command=find
+ local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ local postfix=
+ local suffix=
+ local database=myApp
+ local collection=test
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ local 'command=use myApp\n db.test.find()'
+ egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
+ local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ local driver=mongodb
+ local suffix=.svc.cluster.local
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.yxVbiDn5b7
+++ mktemp
++ local LAST_ERR=/tmp/tmp.Oe6Sy0SbcM
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.yxVbiDn5b7
++ cat /tmp/tmp.Oe6Sy0SbcM
++ rm /tmp/tmp.yxVbiDn5b7 /tmp/tmp.Oe6Sy0SbcM
++ return 0
+ local client_container=psmdb-client-5f578b7f94-bghqv
+ local mongo_flag=
+ [[ myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 == *cfg* ]]
+ replica_set=rs0
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
++ mktemp
+ local LAST_OUT=/tmp/tmp.n2m1uJJIP9
++ mktemp
+ local LAST_ERR=/tmp/tmp.4RzKdtJe5g
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.n2m1uJJIP9
+ cat /tmp/tmp.4RzKdtJe5g
+ rm /tmp/tmp.n2m1uJJIP9 /tmp/tmp.4RzKdtJe5g
+ return 0
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ echo

+ set -o xtrace
+ echo 'drop collection'
drop collection
+ run_mongo 'use myApp\n db.test.drop()' myApp:myPass@some-name-rs0.demand-backup-physical-18487
+ local 'command=use myApp\n db.test.drop()'
+ local uri=myApp:myPass@some-name-rs0.demand-backup-physical-18487
+ local driver=mongodb+srv
+ local suffix=.svc.cluster.local
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.115HcBNMnr
+++ mktemp
++ local LAST_ERR=/tmp/tmp.FlE4bFhkDl
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.115HcBNMnr
++ cat /tmp/tmp.FlE4bFhkDl
++ rm /tmp/tmp.115HcBNMnr /tmp/tmp.FlE4bFhkDl
++ return 0
+ local client_container=psmdb-client-5f578b7f94-bghqv
+ local mongo_flag=
+ [[ myApp:myPass@some-name-rs0.demand-backup-physical-18487 == *cfg* ]]
+ replica_set=rs0
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb+srv://myApp:myPass@some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
++ mktemp
+ local LAST_OUT=/tmp/tmp.K9JNqXicls
++ mktemp
+ local LAST_ERR=/tmp/tmp.cM7FNXK7X4
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb+srv://myApp:myPass@some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.K9JNqXicls
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("b11d0329-e1fc-4349-a5b2-c7aa8751fe49") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
true
bye
+ cat /tmp/tmp.cM7FNXK7X4
+ rm /tmp/tmp.K9JNqXicls /tmp/tmp.cM7FNXK7X4
+ return 0
+ echo 'check backup and restore -- gcp-cs'
check backup and restore -- gcp-cs
+ run_restore backup-gcp-cs
+ local backup_name=backup-gcp-cs
+ cat /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/conf/restore.yml
+ /usr/bin/sed -e 's/name:/name: restore-backup-gcp-cs/'
+ kubectl_bin apply -f -
++ mktemp
+ /usr/bin/sed -e 's/backupName:/backupName: backup-gcp-cs/'
+ local LAST_OUT=/tmp/tmp.MMksOVNWgd
++ mktemp
+ local LAST_ERR=/tmp/tmp.1AQ2QS0QuT
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl apply -f -
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.MMksOVNWgd
perconaservermongodbrestore.psmdb.percona.com/restore-backup-gcp-cs created
+ cat /tmp/tmp.1AQ2QS0QuT
+ rm /tmp/tmp.MMksOVNWgd /tmp/tmp.1AQ2QS0QuT
+ return 0
+ run_recovery_check backup-gcp-cs
+ local backup_name=backup-gcp-cs
+ local compare_suffix=_restore
+ wait_restore backup-gcp-cs some-name requested 0 1200
+ local backup_name=backup-gcp-cs
+ local cluster_name=some-name
+ local target_state=requested
+ local wait_cluster_consistency=0
+ local wait_time=1200
+ set +o xtrace
waiting psmdb-restore/backup-gcp-cs to reach requested state................................................................................................
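Both wait_restore and wait_cluster_consistency above reduce to the same pattern: poll one status field via kubectl jsonpath until it reaches the target state or a retry budget runs out. A condensed sketch of that loop, assuming the same psmdb resource; the names and the 32-attempt limit mirror the harness output above rather than copying its actual functions:

retry=0
until [[ "$(kubectl get psmdb some-name -o jsonpath='{.status.state}')" == "ready" ]]; do
    # give up after 32 polls, matching wait_time=32 in the trace above
    if (( ++retry >= 32 )); then
        echo "cluster never reached ready state" >&2
        exit 1
    fi
    echo -n .
    sleep 10
done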
+ '[' 0 -eq 1 ']' + echo + compare_kubectl statefulset/some-name-rs0 _restore + local resource=statefulset/some-name-rs0 + local postfix=_restore + local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml + local new_result=/tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-oc.yml ']' + kubectl_bin get -o yaml statefulset/some-name-rs0 + yq eval ' del(.metadata.ownerReferences[].apiVersion) | del(.metadata.managedFields) | del(.. | select(has("creationTimestamp")).creationTimestamp) | del(.. | select(has("namespace")).namespace) | del(.. | select(has("uid")).uid) | del(.metadata.resourceVersion) | del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) | del(.metadata.selfLink) | del(.metadata.annotations."cloud.google.com/neg") | del(.. | select(has("image")).image) | del(.. | select(has("clusterIP")).clusterIP) | del(.. | select(has("clusterIPs")).clusterIPs) | del(.. | select(has("dataSource")).dataSource) | del(.. | select(has("procMount")).procMount) | del(.. | select(has("storageClassName")).storageClassName) | del(.. | select(has("finalizers")).finalizers) | del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") | del(.. | select(has("volumeName")).volumeName) | del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") | del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") | del(.spec.volumeMode) | del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") | del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") | del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") | del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") | del(.. | select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") | del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) | del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) | del(.. | select(has("nodePort")).nodePort) | del(.status) | (.. | select(tag == "!!str")) |= sub("demand-backup-physical-18487", "NAME_SPACE") | del(.spec.volumeClaimTemplates[].apiVersion) | del(.spec.volumeClaimTemplates[].kind) | del(.spec.ipFamilies) | del(.spec.ipFamilyPolicy) | (.. | select(. == "extensions/v1beta1")) = "apps/v1" | (.. | select(. 
== "batch/v1beta1")) = "batch/v1" ' - ++ mktemp + local LAST_OUT=/tmp/tmp.mTIkDy0h0o ++ mktemp + local LAST_ERR=/tmp/tmp.n96kNXe4pi + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get -o yaml statefulset/some-name-rs0 + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.mTIkDy0h0o + cat /tmp/tmp.n96kNXe4pi + rm /tmp/tmp.mTIkDy0h0o /tmp/tmp.n96kNXe4pi + return 0 + yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + version_gt 1.22 ++ bc -l ++ echo '1.26 >= 1.22' + '[' 1 -eq 1 ']' + return 0 + yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml == */cronjob* ]] + diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + wait_restore backup-gcp-cs some-name ready 0 1800 + local backup_name=backup-gcp-cs + local cluster_name=some-name + local target_state=ready + local wait_cluster_consistency=0 + local wait_time=1800 + set +o xtrace waiting psmdb-restore/backup-gcp-cs to reach ready state................................................. + '[' 0 -eq 1 ']' + kubectl_bin get psmdb some-name -o yaml ++ mktemp + local LAST_OUT=/tmp/tmp.SbyfAd0TRE ++ mktemp + local LAST_ERR=/tmp/tmp.wfRgpuMaJz + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get psmdb some-name -o yaml + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.SbyfAd0TRE apiVersion: psmdb.percona.com/v1 kind: PerconaServerMongoDB metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"finalizers":["delete-psmdb-pvc"],"name":"some-name","namespace":"demand-backup-physical-18487"},"spec":{"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","storages":{"aws-s3":{"s3":{"bucket":"operator-testing","credentialsSecret":"aws-s3-secret","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical","region":"us-east-1"},"type":"s3"},"azure-blob":{"azure":{"container":"operator-testing","credentialsSecret":"azure-secret","prefix":"psmdb-demand-backup-physical"},"type":"azure"},"gcp-cs":{"s3":{"bucket":"operator-testing","credentialsSecret":"gcp-cs-secret","endpointUrl":"https://storage.googleapis.com","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical","region":"us-east-1"},"type":"s3"},"minio":{"s3":{"bucket":"operator-testing","credentialsSecret":"minio-secret","endpointUrl":"http://minio-service:9000/","insecureSkipTLSVerify":false,"region":"us-east-1"},"type":"s3"}},"tasks":[{"compressionType":"gzip","enabled":true,"name":"weekly","schedule":"0 0 * * 0","storageName":"aws-s3"}]},"image":"perconalab/percona-server-mongodb-operator:main-mongod7.0","imagePullPolicy":"Always","replsets":[{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: false\nsetParameter:\n ttlMonitorSleepSecs: 60\n wiredTigerConcurrentReadTransactions: 128\n 
wiredTigerConcurrentWriteTransactions: 128\nstorage:\n engine: wiredTiger\n wiredTiger:\n collectionConfig:\n blockCompressor: snappy\n engineConfig:\n directoryForIndexes: false\n journalCompressor: snappy\n indexConfig:\n prefixCompression: true\n","expose":{"enabled":false,"exposeType":"ClusterIP"},"name":"rs0","resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}}],"secrets":{"users":"some-users"},"upgradeOptions":{"apply":"Never"}}} percona.com/resync-pbm: "true" creationTimestamp: "2024-05-07T11:00:34Z" finalizers: - delete-psmdb-pvc generation: 2 name: some-name namespace: demand-backup-physical-18487 resourceVersion: "13141" uid: 4d7d694a-2c95-4365-8bdd-524a1b907203 spec: backup: enabled: true image: perconalab/percona-server-mongodb-operator:main-backup storages: aws-s3: s3: bucket: operator-testing credentialsSecret: aws-s3-secret insecureSkipTLSVerify: false prefix: psmdb-demand-backup-physical region: us-east-1 type: s3 azure-blob: azure: container: operator-testing credentialsSecret: azure-secret prefix: psmdb-demand-backup-physical type: azure gcp-cs: s3: bucket: operator-testing credentialsSecret: gcp-cs-secret endpointUrl: https://storage.googleapis.com insecureSkipTLSVerify: false prefix: psmdb-demand-backup-physical region: us-east-1 type: s3 minio: s3: bucket: operator-testing credentialsSecret: minio-secret endpointUrl: http://minio-service:9000/ insecureSkipTLSVerify: false region: us-east-1 type: s3 tasks: - compressionType: gzip enabled: true name: weekly schedule: 0 0 * * 0 storageName: aws-s3 crVersion: 1.16.0 image: perconalab/percona-server-mongodb-operator:main-mongod7.0 imagePullPolicy: Always replsets: - affinity: antiAffinityTopologyKey: none configuration: | operationProfiling: mode: slowOp slowOpThresholdMs: 100 security: enableEncryption: true redactClientLogData: false setParameter: ttlMonitorSleepSecs: 60 wiredTigerConcurrentReadTransactions: 128 wiredTigerConcurrentWriteTransactions: 128 storage: engine: wiredTiger wiredTiger: collectionConfig: blockCompressor: snappy engineConfig: directoryForIndexes: false journalCompressor: snappy indexConfig: prefixCompression: true expose: enabled: false exposeType: ClusterIP name: rs0 resources: limits: cpu: 500m memory: 1G requests: cpu: 100m memory: 0.1G size: 3 volumeSpec: persistentVolumeClaim: resources: requests: storage: 3Gi secrets: users: some-users upgradeOptions: apply: Never status: conditions: - lastTransitionTime: "2024-05-07T11:02:40Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:02:40Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:02:46Z" status: "True" type: ready - lastTransitionTime: "2024-05-07T11:05:59Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:06:31Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:06:30Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:07:03Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:07:03Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:07:35Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:07:35Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:14:29Z" message: 'rs0: ready' reason: RSReady 
status: "True" type: ready - lastTransitionTime: "2024-05-07T11:14:29Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:14:30Z" status: "True" type: ready - lastTransitionTime: "2024-05-07T11:14:57Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:15:34Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:15:34Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:16:17Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:16:17Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:16:59Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:16:59Z" status: "True" type: initializing host: some-name-rs0.demand-backup-physical-18487.svc.cluster.local mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0 mongoVersion: 7.0.8-5 observedGeneration: 2 ready: 0 replsets: rs0: initialized: true ready: 0 size: 3 status: initializing size: 3 state: initializing + cat /tmp/tmp.wfRgpuMaJz + rm /tmp/tmp.SbyfAd0TRE /tmp/tmp.wfRgpuMaJz + return 0 ++ kubectl_bin get psmdb some-name -o yaml ++ yq '.metadata.annotations."percona.com/resync-pbm"' +++ mktemp ++ local LAST_OUT=/tmp/tmp.sUa7IE9pmq +++ mktemp ++ local LAST_ERR=/tmp/tmp.xmNtosnYYa ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o yaml ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.sUa7IE9pmq ++ cat /tmp/tmp.xmNtosnYYa ++ rm /tmp/tmp.sUa7IE9pmq /tmp/tmp.xmNtosnYYa ++ return 0 + '[' true == null ']' + echo + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.tZBOGk3VsT +++ mktemp ++ local LAST_ERR=/tmp/tmp.KrTiPdCvoa ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.tZBOGk3VsT ++ cat /tmp/tmp.KrTiPdCvoa ++ rm /tmp/tmp.tZBOGk3VsT /tmp/tmp.KrTiPdCvoa ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 1 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.L8qmpONekZ +++ mktemp ++ local LAST_ERR=/tmp/tmp.Frr31026cd ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.L8qmpONekZ ++ cat /tmp/tmp.Frr31026cd ++ rm /tmp/tmp.L8qmpONekZ /tmp/tmp.Frr31026cd ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 2 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.3O8qAoIjR1 +++ mktemp ++ local LAST_ERR=/tmp/tmp.wz9aEdmBzo ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.3O8qAoIjR1 ++ cat /tmp/tmp.wz9aEdmBzo ++ rm /tmp/tmp.3O8qAoIjR1 /tmp/tmp.wz9aEdmBzo ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 3 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.IkrdqhrTWw +++ mktemp ++ local LAST_ERR=/tmp/tmp.RpTTt9o4W8 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.IkrdqhrTWw ++ cat /tmp/tmp.RpTTt9o4W8 ++ rm /tmp/tmp.IkrdqhrTWw /tmp/tmp.RpTTt9o4W8 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 4 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.VsJuXaa7a6 +++ mktemp ++ local LAST_ERR=/tmp/tmp.PHXKqYBIBy ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.VsJuXaa7a6 ++ cat /tmp/tmp.PHXKqYBIBy ++ rm /tmp/tmp.VsJuXaa7a6 /tmp/tmp.PHXKqYBIBy ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 5 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.dUtOkZb0op +++ mktemp ++ local LAST_ERR=/tmp/tmp.8B8SV9LFZu ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.dUtOkZb0op ++ cat /tmp/tmp.8B8SV9LFZu ++ rm /tmp/tmp.dUtOkZb0op /tmp/tmp.8B8SV9LFZu ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 6 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.vMGkSeP4o5 +++ mktemp ++ local LAST_ERR=/tmp/tmp.Cj6t1tlv3Y ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.vMGkSeP4o5 ++ cat /tmp/tmp.Cj6t1tlv3Y ++ rm /tmp/tmp.vMGkSeP4o5 /tmp/tmp.Cj6t1tlv3Y ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 7 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.bfTIj2C8cq +++ mktemp ++ local LAST_ERR=/tmp/tmp.aWDYBxq8LU ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.bfTIj2C8cq ++ cat /tmp/tmp.aWDYBxq8LU ++ rm /tmp/tmp.bfTIj2C8cq /tmp/tmp.aWDYBxq8LU ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 8 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.OzUUGo6vFr +++ mktemp ++ local LAST_ERR=/tmp/tmp.K1k0WhowzC ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.OzUUGo6vFr ++ cat /tmp/tmp.K1k0WhowzC ++ rm /tmp/tmp.OzUUGo6vFr /tmp/tmp.K1k0WhowzC ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 9 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.54mND9ulca +++ mktemp ++ local LAST_ERR=/tmp/tmp.BwAGhZldAM ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.54mND9ulca ++ cat /tmp/tmp.BwAGhZldAM ++ rm /tmp/tmp.54mND9ulca /tmp/tmp.BwAGhZldAM ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 10 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.DTMk1JObU2 +++ mktemp ++ local LAST_ERR=/tmp/tmp.cl9LgwP0o9 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.DTMk1JObU2 ++ cat /tmp/tmp.cl9LgwP0o9 ++ rm /tmp/tmp.DTMk1JObU2 /tmp/tmp.cl9LgwP0o9 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 11 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.aysCYmXgP6 +++ mktemp ++ local LAST_ERR=/tmp/tmp.qhyevceMNd ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.aysCYmXgP6 ++ cat /tmp/tmp.qhyevceMNd ++ rm /tmp/tmp.aysCYmXgP6 /tmp/tmp.qhyevceMNd ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 12 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.FuNDpxQ5ZY +++ mktemp ++ local LAST_ERR=/tmp/tmp.QFKc4R1G5x ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.FuNDpxQ5ZY ++ cat /tmp/tmp.QFKc4R1G5x ++ rm /tmp/tmp.FuNDpxQ5ZY /tmp/tmp.QFKc4R1G5x ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 13 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.7ZS5ZwdE9y +++ mktemp ++ local LAST_ERR=/tmp/tmp.wAuSWop2et ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.7ZS5ZwdE9y ++ cat /tmp/tmp.wAuSWop2et ++ rm /tmp/tmp.7ZS5ZwdE9y /tmp/tmp.wAuSWop2et ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 mongodb '' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.LF6RXpw5eq +++ mktemp ++ local LAST_ERR=/tmp/tmp.YRJ0HfZmg5 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.LF6RXpw5eq ++ cat /tmp/tmp.YRJ0HfZmg5 ++ rm /tmp/tmp.LF6RXpw5eq /tmp/tmp.YRJ0HfZmg5 ++ return 0 + local client_container=psmdb-client-5f578b7f94-bghqv + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.Y3bsiXgBYG ++ mktemp + local LAST_ERR=/tmp/tmp.yH2TicAkjk + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.Y3bsiXgBYG + cat /tmp/tmp.yH2TicAkjk + rm /tmp/tmp.Y3bsiXgBYG /tmp/tmp.yH2TicAkjk + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 mongodb '' + local 'command=use myApp\n db.test.find()' + 
local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.eKuqSipAuM +++ mktemp ++ local LAST_ERR=/tmp/tmp.quc7Q0KTcF ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.eKuqSipAuM ++ cat /tmp/tmp.quc7Q0KTcF ++ rm /tmp/tmp.eKuqSipAuM /tmp/tmp.quc7Q0KTcF ++ return 0 + local client_container=psmdb-client-5f578b7f94-bghqv + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.bNwdKPpJa6 ++ mktemp + local LAST_ERR=/tmp/tmp.eO23D0Cwgs + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.bNwdKPpJa6 + cat /tmp/tmp.eO23D0Cwgs + rm /tmp/tmp.bNwdKPpJa6 /tmp/tmp.eO23D0Cwgs + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' +++ mktemp ++ local LAST_OUT=/tmp/tmp.3r6NOnfe10 +++ mktemp ++ local LAST_ERR=/tmp/tmp.LemUgkwPBS ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.3r6NOnfe10 ++ cat /tmp/tmp.LemUgkwPBS ++ rm /tmp/tmp.3r6NOnfe10 /tmp/tmp.LemUgkwPBS ++ return 0 + local 
client_container=psmdb-client-5f578b7f94-bghqv + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.MnGxLkKWgX ++ mktemp + local LAST_ERR=/tmp/tmp.Vws4VnkUp4 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.MnGxLkKWgX + cat /tmp/tmp.Vws4VnkUp4 + rm /tmp/tmp.MnGxLkKWgX /tmp/tmp.Vws4VnkUp4 + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find + echo + set -o xtrace + echo 'drop collection' drop collection + run_mongo 'use myApp\n db.test.drop()' myApp:myPass@some-name-rs0.demand-backup-physical-18487 + local 'command=use myApp\n db.test.drop()' + local uri=myApp:myPass@some-name-rs0.demand-backup-physical-18487 + local driver=mongodb+srv + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.segLa8afkV +++ mktemp ++ local LAST_ERR=/tmp/tmp.f1Zzz8YSYD ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.segLa8afkV ++ cat /tmp/tmp.f1Zzz8YSYD ++ rm /tmp/tmp.segLa8afkV /tmp/tmp.f1Zzz8YSYD ++ return 0 + local client_container=psmdb-client-5f578b7f94-bghqv + local mongo_flag= + [[ myApp:myPass@some-name-rs0.demand-backup-physical-18487 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb+srv://myApp:myPass@some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.NBQUfDoCHK ++ mktemp + local LAST_ERR=/tmp/tmp.lpYT3ZZFTt + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb+srv://myApp:myPass@some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.NBQUfDoCHK Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false Implicit session: session { "id" : UUID("82e3eb99-afe9-4ebc-a5b3-d038dd3eccd6") } Percona Server for MongoDB server version: v7.0.8-5 WARNING: shell and server versions do not match switched to db myApp true bye + cat 
/tmp/tmp.lpYT3ZZFTt + rm /tmp/tmp.NBQUfDoCHK /tmp/tmp.lpYT3ZZFTt + return 0 + echo 'check backup and restore -- azure-blob' check backup and restore -- azure-blob + run_restore backup-azure-blob + local backup_name=backup-azure-blob + cat /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/conf/restore.yml + /usr/bin/sed -e 's/name:/name: restore-backup-azure-blob/' + kubectl_bin apply -f - + /usr/bin/sed -e 's/backupName:/backupName: backup-azure-blob/' ++ mktemp + local LAST_OUT=/tmp/tmp.cbWNu90QvK ++ mktemp + local LAST_ERR=/tmp/tmp.NFCbkd6kp4 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl apply -f - + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.cbWNu90QvK perconaservermongodbrestore.psmdb.percona.com/restore-backup-azure-blob created + cat /tmp/tmp.NFCbkd6kp4 + rm /tmp/tmp.cbWNu90QvK /tmp/tmp.NFCbkd6kp4 + return 0 + run_recovery_check backup-azure-blob + local backup_name=backup-azure-blob + local compare_suffix=_restore + wait_restore backup-azure-blob some-name requested 0 1200 + local backup_name=backup-azure-blob + local cluster_name=some-name + local target_state=requested + local wait_cluster_consistency=0 + local wait_time=1200 + set +o xtrace waiting psmdb-restore/backup-azure-blob to reach requested state............................................................................................. + '[' 0 -eq 1 ']' + echo + compare_kubectl statefulset/some-name-rs0 _restore + local resource=statefulset/some-name-rs0 + local postfix=_restore + local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml + local new_result=/tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-oc.yml ']' + yq eval ' del(.metadata.ownerReferences[].apiVersion) | del(.metadata.managedFields) | del(.. | select(has("creationTimestamp")).creationTimestamp) | del(.. | select(has("namespace")).namespace) | del(.. | select(has("uid")).uid) | del(.metadata.resourceVersion) | del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) | del(.metadata.selfLink) | del(.metadata.annotations."cloud.google.com/neg") | del(.. | select(has("image")).image) | del(.. | select(has("clusterIP")).clusterIP) | del(.. | select(has("clusterIPs")).clusterIPs) | del(.. | select(has("dataSource")).dataSource) | del(.. | select(has("procMount")).procMount) | del(.. | select(has("storageClassName")).storageClassName) | del(.. | select(has("finalizers")).finalizers) | del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") | del(.. | select(has("volumeName")).volumeName) | del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") | del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") | del(.spec.volumeMode) | del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") | del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") | del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") | del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") | del(.. 
| select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") | del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) | del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) | del(.. | select(has("nodePort")).nodePort) | del(.status) | (.. | select(tag == "!!str")) |= sub("demand-backup-physical-18487", "NAME_SPACE") | del(.spec.volumeClaimTemplates[].apiVersion) | del(.spec.volumeClaimTemplates[].kind) | del(.spec.ipFamilies) | del(.spec.ipFamilyPolicy) | (.. | select(. == "extensions/v1beta1")) = "apps/v1" | (.. | select(. == "batch/v1beta1")) = "batch/v1" ' - + kubectl_bin get -o yaml statefulset/some-name-rs0 ++ mktemp + local LAST_OUT=/tmp/tmp.i4RrlqNuqW ++ mktemp + local LAST_ERR=/tmp/tmp.hfJDlURSm7 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get -o yaml statefulset/some-name-rs0 + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.i4RrlqNuqW + cat /tmp/tmp.hfJDlURSm7 + rm /tmp/tmp.i4RrlqNuqW /tmp/tmp.hfJDlURSm7 + return 0 + yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + version_gt 1.22 ++ echo '1.26 >= 1.22' ++ bc -l + '[' 1 -eq 1 ']' + return 0 + yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml == */cronjob* ]] + diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml + wait_restore backup-azure-blob some-name ready 0 1800 + local backup_name=backup-azure-blob + local cluster_name=some-name + local target_state=ready + local wait_cluster_consistency=0 + local wait_time=1800 + set +o xtrace waiting psmdb-restore/backup-azure-blob to reach ready state................................................. 
+ '[' 0 -eq 1 ']' + kubectl_bin get psmdb some-name -o yaml ++ mktemp + local LAST_OUT=/tmp/tmp.dHOb7fseK9 ++ mktemp + local LAST_ERR=/tmp/tmp.Nd59gduC0i + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get psmdb some-name -o yaml + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.dHOb7fseK9 apiVersion: psmdb.percona.com/v1 kind: PerconaServerMongoDB metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"finalizers":["delete-psmdb-pvc"],"name":"some-name","namespace":"demand-backup-physical-18487"},"spec":{"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","storages":{"aws-s3":{"s3":{"bucket":"operator-testing","credentialsSecret":"aws-s3-secret","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical","region":"us-east-1"},"type":"s3"},"azure-blob":{"azure":{"container":"operator-testing","credentialsSecret":"azure-secret","prefix":"psmdb-demand-backup-physical"},"type":"azure"},"gcp-cs":{"s3":{"bucket":"operator-testing","credentialsSecret":"gcp-cs-secret","endpointUrl":"https://storage.googleapis.com","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical","region":"us-east-1"},"type":"s3"},"minio":{"s3":{"bucket":"operator-testing","credentialsSecret":"minio-secret","endpointUrl":"http://minio-service:9000/","insecureSkipTLSVerify":false,"region":"us-east-1"},"type":"s3"}},"tasks":[{"compressionType":"gzip","enabled":true,"name":"weekly","schedule":"0 0 * * 0","storageName":"aws-s3"}]},"image":"perconalab/percona-server-mongodb-operator:main-mongod7.0","imagePullPolicy":"Always","replsets":[{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: false\nsetParameter:\n ttlMonitorSleepSecs: 60\n wiredTigerConcurrentReadTransactions: 128\n wiredTigerConcurrentWriteTransactions: 128\nstorage:\n engine: wiredTiger\n wiredTiger:\n collectionConfig:\n blockCompressor: snappy\n engineConfig:\n directoryForIndexes: false\n journalCompressor: snappy\n indexConfig:\n prefixCompression: true\n","expose":{"enabled":false,"exposeType":"ClusterIP"},"name":"rs0","resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}}],"secrets":{"users":"some-users"},"upgradeOptions":{"apply":"Never"}}} percona.com/resync-pbm: "true" creationTimestamp: "2024-05-07T11:00:34Z" finalizers: - delete-psmdb-pvc generation: 2 name: some-name namespace: demand-backup-physical-18487 resourceVersion: "17180" uid: 4d7d694a-2c95-4365-8bdd-524a1b907203 spec: backup: enabled: true image: perconalab/percona-server-mongodb-operator:main-backup storages: aws-s3: s3: bucket: operator-testing credentialsSecret: aws-s3-secret insecureSkipTLSVerify: false prefix: psmdb-demand-backup-physical region: us-east-1 type: s3 azure-blob: azure: container: operator-testing credentialsSecret: azure-secret prefix: psmdb-demand-backup-physical type: azure gcp-cs: s3: bucket: operator-testing credentialsSecret: gcp-cs-secret endpointUrl: https://storage.googleapis.com insecureSkipTLSVerify: false prefix: psmdb-demand-backup-physical region: us-east-1 type: s3 minio: s3: bucket: operator-testing credentialsSecret: minio-secret endpointUrl: 
http://minio-service:9000/ insecureSkipTLSVerify: false region: us-east-1 type: s3 tasks: - compressionType: gzip enabled: true name: weekly schedule: 0 0 * * 0 storageName: aws-s3 crVersion: 1.16.0 image: perconalab/percona-server-mongodb-operator:main-mongod7.0 imagePullPolicy: Always replsets: - affinity: antiAffinityTopologyKey: none configuration: | operationProfiling: mode: slowOp slowOpThresholdMs: 100 security: enableEncryption: true redactClientLogData: false setParameter: ttlMonitorSleepSecs: 60 wiredTigerConcurrentReadTransactions: 128 wiredTigerConcurrentWriteTransactions: 128 storage: engine: wiredTiger wiredTiger: collectionConfig: blockCompressor: snappy engineConfig: directoryForIndexes: false journalCompressor: snappy indexConfig: prefixCompression: true expose: enabled: false exposeType: ClusterIP name: rs0 resources: limits: cpu: 500m memory: 1G requests: cpu: 100m memory: 0.1G size: 3 volumeSpec: persistentVolumeClaim: resources: requests: storage: 3Gi secrets: users: some-users upgradeOptions: apply: Never status: conditions: - lastTransitionTime: "2024-05-07T11:07:35Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:07:35Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:14:29Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:14:29Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:14:30Z" status: "True" type: ready - lastTransitionTime: "2024-05-07T11:14:57Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:15:34Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:15:34Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:16:17Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:16:17Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:16:59Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:16:59Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:22:28Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:22:50Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:23:23Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:23:23Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:24:00Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:24:00Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:24:32Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:24:32Z" status: "True" type: initializing host: some-name-rs0.demand-backup-physical-18487.svc.cluster.local mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0 mongoVersion: 7.0.8-5 observedGeneration: 2 ready: 0 replsets: rs0: initialized: true ready: 0 size: 3 status: initializing size: 3 state: initializing + cat /tmp/tmp.Nd59gduC0i + rm /tmp/tmp.dHOb7fseK9 /tmp/tmp.Nd59gduC0i + return 0 ++ kubectl_bin get psmdb some-name -o yaml ++ yq '.metadata.annotations."percona.com/resync-pbm"' +++ mktemp ++ local LAST_OUT=/tmp/tmp.fHnCgcItxM +++ mktemp ++ local LAST_ERR=/tmp/tmp.IjYjStSqZR ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 
0 2)' ++ set +e ++ kubectl get psmdb some-name -o yaml ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.fHnCgcItxM ++ cat /tmp/tmp.IjYjStSqZR ++ rm /tmp/tmp.fHnCgcItxM /tmp/tmp.IjYjStSqZR ++ return 0 + '[' true == null ']' + echo + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.4oMKvJvYq4 +++ mktemp ++ local LAST_ERR=/tmp/tmp.g7vtqnIUEk ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.4oMKvJvYq4 ++ cat /tmp/tmp.g7vtqnIUEk ++ rm /tmp/tmp.4oMKvJvYq4 /tmp/tmp.g7vtqnIUEk ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 1 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.5Hs0y9vcCy +++ mktemp ++ local LAST_ERR=/tmp/tmp.CFS27SnlKe ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.5Hs0y9vcCy ++ cat /tmp/tmp.CFS27SnlKe ++ rm /tmp/tmp.5Hs0y9vcCy /tmp/tmp.CFS27SnlKe ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 2 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.S3RVbamEx2 +++ mktemp ++ local LAST_ERR=/tmp/tmp.pB2lGYC9k2 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.S3RVbamEx2 ++ cat /tmp/tmp.pB2lGYC9k2 ++ rm /tmp/tmp.S3RVbamEx2 /tmp/tmp.pB2lGYC9k2 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 3 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.JDMiwZwPxo +++ mktemp ++ local LAST_ERR=/tmp/tmp.Cj6alzExUS ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.JDMiwZwPxo ++ cat /tmp/tmp.Cj6alzExUS ++ rm /tmp/tmp.JDMiwZwPxo /tmp/tmp.Cj6alzExUS ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 4 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.9t3hCxQChx +++ mktemp ++ local LAST_ERR=/tmp/tmp.7tEouHOMNq ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.9t3hCxQChx ++ cat /tmp/tmp.7tEouHOMNq ++ rm /tmp/tmp.9t3hCxQChx /tmp/tmp.7tEouHOMNq ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 5 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.kzCZNnc17B +++ mktemp ++ local LAST_ERR=/tmp/tmp.blIFXwqr9m ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.kzCZNnc17B ++ cat /tmp/tmp.blIFXwqr9m ++ rm /tmp/tmp.kzCZNnc17B /tmp/tmp.blIFXwqr9m ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 6 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.MqUtBytoWD +++ mktemp ++ local LAST_ERR=/tmp/tmp.oSfhoLoM0e ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.MqUtBytoWD ++ cat /tmp/tmp.oSfhoLoM0e ++ rm /tmp/tmp.MqUtBytoWD /tmp/tmp.oSfhoLoM0e ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 7 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.K5jBjfG3ze +++ mktemp ++ local LAST_ERR=/tmp/tmp.n0iyXcJAYd ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.K5jBjfG3ze ++ cat /tmp/tmp.n0iyXcJAYd ++ rm /tmp/tmp.K5jBjfG3ze /tmp/tmp.n0iyXcJAYd ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 8 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.81YAt5y7C1 +++ mktemp ++ local LAST_ERR=/tmp/tmp.SDLqT2Ld9G ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.81YAt5y7C1 ++ cat /tmp/tmp.SDLqT2Ld9G ++ rm /tmp/tmp.81YAt5y7C1 /tmp/tmp.SDLqT2Ld9G ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 9 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.BgRBkwDOjA +++ mktemp ++ local LAST_ERR=/tmp/tmp.qoiRdtddQJ ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.BgRBkwDOjA ++ cat /tmp/tmp.qoiRdtddQJ ++ rm /tmp/tmp.BgRBkwDOjA /tmp/tmp.qoiRdtddQJ ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 10 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.NSmuvQjdsp +++ mktemp ++ local LAST_ERR=/tmp/tmp.z6WIXnzRMI ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.NSmuvQjdsp ++ cat /tmp/tmp.z6WIXnzRMI ++ rm /tmp/tmp.NSmuvQjdsp /tmp/tmp.z6WIXnzRMI ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 11 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.o1TmKCf4V7 +++ mktemp ++ local LAST_ERR=/tmp/tmp.ox6FZmsue5 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.o1TmKCf4V7 ++ cat /tmp/tmp.ox6FZmsue5 ++ rm /tmp/tmp.o1TmKCf4V7 /tmp/tmp.ox6FZmsue5 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 12 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.5sS1K2H1am +++ mktemp ++ local LAST_ERR=/tmp/tmp.4DoX9BomKs ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.5sS1K2H1am ++ cat /tmp/tmp.4DoX9BomKs ++ rm /tmp/tmp.5sS1K2H1am /tmp/tmp.4DoX9BomKs ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.B3ufz66UA7 +++ mktemp ++ local LAST_ERR=/tmp/tmp.QIBZ68B6ov ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.B3ufz66UA7 ++ cat /tmp/tmp.QIBZ68B6ov ++ rm /tmp/tmp.B3ufz66UA7 /tmp/tmp.QIBZ68B6ov ++ return 0 + local client_container=psmdb-client-5f578b7f94-bghqv + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.DLndAyB7NN ++ mktemp + local LAST_ERR=/tmp/tmp.iTPmEqylgE + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.DLndAyB7NN + cat /tmp/tmp.iTPmEqylgE + rm /tmp/tmp.DLndAyB7NN /tmp/tmp.iTPmEqylgE + return 0 + 
diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ local command=find
+ local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ local postfix=
+ local suffix=
+ local database=myApp
+ local collection=test
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.ga3LCkQCzo
+++ mktemp
++ local LAST_ERR=/tmp/tmp.nwYMwpDMBm
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.ga3LCkQCzo
++ cat /tmp/tmp.nwYMwpDMBm
++ rm /tmp/tmp.ga3LCkQCzo /tmp/tmp.nwYMwpDMBm
++ return 0
[kubectl_bin's mktemp/retry/capture scaffolding, shown in full once above, plus the repeated per-call local-variable dumps and identical egrep/sed normalization lines, are elided from the rest of this section]
+ local client_container=psmdb-client-5f578b7f94-bghqv
+ local mongo_flag=
+ [[ myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 == *cfg* ]]
+ replica_set=rs0
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ echo 'drop collection'
drop collection
+ run_mongo 'use myApp\n db.test.drop()' myApp:myPass@some-name-rs0.demand-backup-physical-18487
+ local driver=mongodb+srv
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb+srv://myApp:myPass@some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("6d150613-5e00-4964-b1b7-e2d0e6912290") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
true
bye
+ echo 'check backup and restore -- minio'
check backup and restore -- minio
++ get_backup_dest backup-minio
++ kubectl_bin get psmdb-backup backup-minio -o 'jsonpath={.status.destination}'
++ sed 's|s3://||'
++ sed -e 's/.json$//'
++ sed 's|azure://||'
+ backup_dest_minio=operator-testing/2024-05-07T11:03:40Z
+ run_restore backup-minio
+ cat /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/conf/restore.yml
+ /usr/bin/sed -e 's/name:/name: restore-backup-minio/'
+ /usr/bin/sed -e 's/backupName:/backupName: backup-minio/'
+ kubectl_bin apply -f -
perconaservermongodbrestore.psmdb.percona.com/restore-backup-minio created
+ run_recovery_check backup-minio
+ wait_restore backup-minio some-name requested 0 1200
+ set +o xtrace
waiting psmdb-restore/backup-minio to reach requested state.........................................................
+ '[' 0 -eq 1 ']'
+ echo
+ compare_kubectl statefulset/some-name-rs0 _restore
+ local resource=statefulset/some-name-rs0
+ local postfix=_restore
+ local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml
+ local new_result=/tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-oc.yml ']'
+ kubectl_bin get -o yaml statefulset/some-name-rs0
+ yq eval '
  del(.metadata.ownerReferences[].apiVersion) |
  del(.metadata.managedFields) |
  del(.. | select(has("creationTimestamp")).creationTimestamp) |
  del(.. | select(has("namespace")).namespace) |
  del(.. | select(has("uid")).uid) |
  del(.metadata.resourceVersion) |
  del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) |
  del(.metadata.selfLink) |
  del(.metadata.annotations."cloud.google.com/neg") |
  del(.. | select(has("image")).image) |
  del(.. | select(has("clusterIP")).clusterIP) |
  del(.. | select(has("clusterIPs")).clusterIPs) |
  del(.. | select(has("dataSource")).dataSource) |
  del(.. | select(has("procMount")).procMount) |
  del(.. | select(has("storageClassName")).storageClassName) |
  del(.. | select(has("finalizers")).finalizers) |
  del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") |
  del(.. | select(has("volumeName")).volumeName) |
  del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") |
  del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") |
  del(.spec.volumeMode) |
  del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") |
  del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") |
  del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") |
  del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") |
  del(.. | select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") |
  del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) |
  del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) |
  del(.. | select(has("nodePort")).nodePort) |
  del(.status) |
  (.. | select(tag == "!!str")) |= sub("demand-backup-physical-18487", "NAME_SPACE") |
  del(.spec.volumeClaimTemplates[].apiVersion) |
  del(.spec.volumeClaimTemplates[].kind) |
  del(.spec.ipFamilies) |
  del(.spec.ipFamilyPolicy) |
  (.. | select(. == "extensions/v1beta1")) = "apps/v1" |
  (.. | select(. == "batch/v1beta1")) = "batch/v1"
' -
+ kubectl get -o yaml statefulset/some-name-rs0
+ yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ version_gt 1.22
++ echo '1.26 >= 1.22'
++ bc -l
+ '[' 1 -eq 1 ']'
+ yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml == */cronjob* ]]
+ diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore.yml /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ wait_restore backup-minio some-name ready 0 1800
+ set +o xtrace
waiting psmdb-restore/backup-minio to reach ready state..........................................
+ '[' 0 -eq 1 ']'
+ kubectl_bin get psmdb some-name -o yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      [one-line JSON copy of the applied spec, duplicated by the spec section below -- elided]
    percona.com/resync-pbm: "true"
  creationTimestamp: "2024-05-07T11:00:34Z"
  finalizers:
  - delete-psmdb-pvc
  generation: 2
  name: some-name
  namespace: demand-backup-physical-18487
  resourceVersion: "20385"
  uid: 4d7d694a-2c95-4365-8bdd-524a1b907203
spec:
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    storages:
      aws-s3:
        s3:
          bucket: operator-testing
          credentialsSecret: aws-s3-secret
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical
          region: us-east-1
        type: s3
      azure-blob:
        azure:
          container: operator-testing
          credentialsSecret: azure-secret
          prefix: psmdb-demand-backup-physical
        type: azure
      gcp-cs:
        s3:
          bucket: operator-testing
          credentialsSecret: gcp-cs-secret
          endpointUrl: https://storage.googleapis.com
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical
          region: us-east-1
        type: s3
      minio:
        s3:
          bucket: operator-testing
          credentialsSecret: minio-secret
          endpointUrl: http://minio-service:9000/
          insecureSkipTLSVerify: false
          region: us-east-1
        type: s3
    tasks:
    - compressionType: gzip
      enabled: true
      name: weekly
      schedule: 0 0 * * 0
      storageName: aws-s3
  crVersion: 1.16.0
  image: perconalab/percona-server-mongodb-operator:main-mongod7.0
  imagePullPolicy: Always
  replsets:
  - affinity:
      antiAffinityTopologyKey: none
    configuration: |
      operationProfiling:
        mode: slowOp
        slowOpThresholdMs: 100
      security:
        enableEncryption: true
        redactClientLogData: false
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      storage:
        engine: wiredTiger
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
    expose:
      enabled: false
      exposeType: ClusterIP
    name: rs0
    resources:
      limits:
        cpu: 500m
        memory: 1G
      requests:
        cpu: 100m
        memory: 0.1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: some-users
  upgradeOptions:
    apply: Never
status:
  conditions:
  - {lastTransitionTime: "2024-05-07T11:16:17Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:16:17Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:16:59Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:16:59Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:22:28Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:22:50Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:23:23Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:23:23Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:24:00Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:24:00Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:24:32Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:24:32Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:30:01Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:30:18Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:30:55Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:30:55Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:31:27Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:31:27Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:32:05Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:32:04Z", status: "True", type: initializing}
  host: some-name-rs0.demand-backup-physical-18487.svc.cluster.local
  mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0
  mongoVersion: 7.0.8-5
  observedGeneration: 2
  ready: 1
  replsets:
    rs0:
      initialized: true
      ready: 1
      size: 3
      status: initializing
  size: 3
  state: initializing
++ kubectl_bin get psmdb some-name -o yaml
++ yq '.metadata.annotations."percona.com/resync-pbm"'
+ '[' true == null ']'
+ echo
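The annotation check above confirms the operator has flagged the cluster for a PBM resync after the physical restore; the loop that follows polls .status.state until the operator reports "ready". A condensed sketch of that polling logic, reconstructed from the trace (the real harness routes kubectl through its kubectl_bin retry wrapper; this sketch calls kubectl directly):

  # Poll the PSMDB custom resource until the operator reports it ready,
  # sleeping 10s between attempts and giving up after 32 tries.
  wait_cluster_consistency() {
      local cluster_name=$1
      local wait_time=32
      local retry=0
      sleep 7
      echo -n 'waiting for cluster readyness'
      until [[ $(kubectl get psmdb "${cluster_name}" -o 'jsonpath={.status.state}') == "ready" ]]; do
          let retry+=1
          if [ "${retry}" -ge "${wait_time}" ]; then
              echo "cluster did not reach ready state in time"
              return 1
          fi
          echo -n .
          sleep 10
      done
  }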
+ wait_cluster_consistency some-name
+ local cluster_name=some-name
+ local wait_time=32
+ retry=0
+ sleep 7
+ echo -n 'waiting for cluster readyness'
waiting for cluster readyness
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 1 -ge 32 ']'
+ echo -n .
[12 more polls over the following ~2 minutes returned "initializing" -- elided]
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+ [[ ready == \r\e\a\d\y ]]
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
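Each of the per-pod find comparisons in this section follows the same pattern: exec the mongo shell inside the psmdb-client pod, strip session noise and ObjectIds from the output, and diff the result against the golden find.json fixture. A sketch of that pattern (the URI, filters, and fixture path are taken from the trace; the function shape, and the test_dir/tmp_dir variables, are reconstructions):

  # Run db.test.find() inside the client pod and normalize its output
  # so it can be diffed against the expected find.json fixture.
  compare_find() {
      local uri=$1   # e.g. myApp:myPass@some-name-rs0-0.some-name-rs0.<namespace>
      local pod
      pod=$(kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}')
      kubectl exec "${pod}" -- bash -c \
          "printf 'use myApp\n db.test.find()\n' | mongo 'mongodb://${uri}.svc.cluster.local/admin?ssl=false&replicaSet=rs0'" \
          | egrep -v 'I NETWORK|W NETWORK|F NETWORK|Percona Server for MongoDB|connecting to:|Implicit session:|versions do not match' \
          | sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' \
          > "${tmp_dir}/find"
      diff "${test_dir}/compare/find.json" "${tmp_dir}/find"
  }

Running it once per replica (rs0-0, rs0-1, rs0-2) is what proves the restored data is present on every member, not just the primary.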
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ desc 'Testing with arbiter and non-voting nodes'
-----------------------------------------------------------------------------------
Testing with arbiter and non-voting nodes
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name configured
check if all pods started
waiting for pod/some-name-rs0-0 to be ready.OK
waiting for pod/some-name-rs0-1 to be ready.OK
waiting for pod/some-name-rs0-arbiter-0 to be ready......OK
Waiting for cluster readyness..
waiting for cluster readyness
running backups
perconaservermongodbbackup.psmdb.percona.com/backup-minio-arbiter-nv created
backup-minio-arbiter-nv.................
drop collection
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-nv-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-arbiter-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017,some-name-rs0-3.some-name-rs0.demand-backup-physical-18487.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("92830099-00ca-4f98-b579-68d3b81f5885") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
true
bye
check backup and restore -- minio
perconaservermongodbrestore.psmdb.percona.com/restore-backup-minio-arbiter-nv created
waiting psmdb-restore/backup-minio-arbiter-nv to reach requested state.....................................................................
+ '[' 0 -eq 1 ']'
+ echo
+ compare_kubectl statefulset/some-name-rs0 _restore-arbiter-nv
+ local resource=statefulset/some-name-rs0
+ local postfix=_restore-arbiter-nv
+ local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-arbiter-nv.yml
+ local new_result=/tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-arbiter-nv-oc.yml ']'
+ kubectl_bin get -o yaml statefulset/some-name-rs0
+ yq eval '[identical normalization filter to the one shown earlier in this section -- elided]' -
+ yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ version_gt 1.22
++ echo '1.26 >= 1.22'
++ bc -l
+ '[' 1 -eq 1 ']'
+ yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-arbiter-nv.yml == */cronjob* ]]
+ diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/statefulset_some-name-rs0_restore-arbiter-nv.yml /tmp/tmp.J3CSzlM5Aj/statefulset_some-name-rs0.yml
+ wait_restore backup-minio-arbiter-nv some-name ready 0 1800
+ set +o xtrace
waiting psmdb-restore/backup-minio-arbiter-nv to reach ready state.................................................
+ '[' 0 -eq 1 ']'
+ kubectl_bin get psmdb some-name -o yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      [one-line JSON copy of the applied spec, duplicated by the spec section below -- elided]
    percona.com/resync-pbm: "true"
  creationTimestamp: "2024-05-07T11:00:34Z"
  finalizers:
  - delete-psmdb-pvc
  generation: 3
  name: some-name
  namespace: demand-backup-physical-18487
  resourceVersion: "24760"
  uid: 4d7d694a-2c95-4365-8bdd-524a1b907203
spec:
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    storages:
      aws-s3:
        s3:
          bucket: operator-testing
          credentialsSecret: aws-s3-secret
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical
          region: us-east-1
        type: s3
      azure-blob:
        azure:
          container: operator-testing
          credentialsSecret: azure-secret
          prefix: psmdb-demand-backup-physical
        type: azure
      gcp-cs:
        s3:
          bucket: operator-testing
          credentialsSecret: gcp-cs-secret
          endpointUrl: https://storage.googleapis.com
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical
          region: us-east-1
        type: s3
      minio:
        s3:
          bucket: operator-testing
          credentialsSecret: minio-secret
          endpointUrl: http://minio-service:9000/
          insecureSkipTLSVerify: false
          region: us-east-1
        type: s3
    tasks:
    - compressionType: gzip
      enabled: true
      name: weekly
      schedule: 0 0 * * 0
      storageName: aws-s3
  crVersion: 1.16.0
  image: perconalab/percona-server-mongodb-operator:main-mongod7.0
  imagePullPolicy: Always
  replsets:
  - affinity:
      antiAffinityTopologyKey: none
    arbiter:
      affinity:
        antiAffinityTopologyKey: none
      enabled: true
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 1
    configuration: |
      operationProfiling:
        mode: slowOp
        slowOpThresholdMs: 100
      security:
        enableEncryption: true
        redactClientLogData: false
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      storage:
        engine: wiredTiger
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
    expose:
      enabled: false
      exposeType: ClusterIP
    name: rs0
    nonvoting:
      affinity:
        antiAffinityTopologyKey: none
      enabled: true
      resources:
        limits:
          cpu: 300m
          memory: 0.5G
        requests:
          cpu: 300m
          memory: 0.5G
      size: 1
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 1Gi
    resources:
      limits:
        cpu: 500m
        memory: 1G
      requests:
        cpu: 100m
        memory: 0.1G
    size: 4
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: some-users
  upgradeOptions:
    apply: Never
status:
  conditions:
  - {lastTransitionTime: "2024-05-07T11:23:23Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:23:23Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:24:00Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:24:00Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:24:32Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:24:32Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:30:01Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:30:18Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:30:55Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:30:55Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:31:27Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:31:27Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:32:05Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:32:04Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:36:05Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:36:26Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:36:49Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:36:49Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:36:54Z", status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:37:49Z", status: "True", type: initializing}
  host: some-name-rs0.demand-backup-physical-18487.svc.cluster.local
  mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0
  mongoVersion: 7.0.8-5
  observedGeneration: 3
  ready: 1
  replsets:
    rs0:
      initialized: true
      ready: 1
      size: 6
      status: initializing
  size: 6
  state: initializing
++ kubectl_bin get psmdb some-name -o yaml
++ yq '.metadata.annotations."percona.com/resync-pbm"'
+ '[' true == null ']'
+ echo
+ wait_cluster_consistency some-name
+ local cluster_name=some-name
+ local wait_time=32
+ retry=0
+ sleep 7
+ echo -n 'waiting for cluster readyness'
waiting for cluster readyness
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 1 -ge 32 ']'
+ echo -n .
[14 more polls over the following ~2.5 minutes returned "initializing" -- elided]
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.K3QNej6Nhw +++ mktemp ++ local LAST_ERR=/tmp/tmp.lSqSEJV8AL ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.K3QNej6Nhw ++ cat /tmp/tmp.lSqSEJV8AL ++ rm /tmp/tmp.K3QNej6Nhw /tmp/tmp.lSqSEJV8AL ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 + local driver=mongodb + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.9wEZennOPY +++ mktemp ++ local LAST_ERR=/tmp/tmp.i1VY0gpmzv ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.9wEZennOPY ++ cat /tmp/tmp.i1VY0gpmzv ++ rm /tmp/tmp.9wEZennOPY /tmp/tmp.i1VY0gpmzv ++ return 0 + local client_container=psmdb-client-5f578b7f94-bghqv + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.IevsusteYR ++ mktemp + local LAST_ERR=/tmp/tmp.0FX2IymmMV + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.IevsusteYR + cat /tmp/tmp.0FX2IymmMV + rm /tmp/tmp.IevsusteYR /tmp/tmp.0FX2IymmMV + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do 
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ local command=find
+ local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ local postfix=
+ local suffix=
+ local database=myApp
+ local collection=test
+ egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ local 'command=use myApp\n db.test.find()'
+ local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487
+ local driver=mongodb
+ local suffix=.svc.cluster.local
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.PNwFM7XlHI
+++ mktemp
++ local LAST_ERR=/tmp/tmp.iW348wNInH
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.PNwFM7XlHI
++ cat /tmp/tmp.iW348wNInH
++ rm /tmp/tmp.PNwFM7XlHI /tmp/tmp.iW348wNInH
++ return 0
+ local client_container=psmdb-client-5f578b7f94-bghqv
+ local mongo_flag=
+ [[ myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487 == *cfg* ]]
+ replica_set=rs0
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
++ mktemp
+ local LAST_OUT=/tmp/tmp.XJDy8ixZfi
++ mktemp
+ local LAST_ERR=/tmp/tmp.TBBbIf9cyR
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.XJDy8ixZfi
+ cat /tmp/tmp.TBBbIf9cyR
+ rm /tmp/tmp.XJDy8ixZfi /tmp/tmp.TBBbIf9cyR
+ return 0
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
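Each compare_mongo_cmd block above repeats the same pipeline: query the collection through the psmdb-client pod, strip client and driver noise with egrep, mask volatile values (ObjectIds, the pod ordinal in hostnames) with sed, and diff the result against a golden file. A condensed sketch of that flow under the paths seen in this log; the function name compare_find and the workdir argument are illustrative, not the harness's actual identifiers:

compare_find() {
    local uri="$1" expected="$2" workdir="$3"
    local client
    client=$(kubectl get pods --selector=name=psmdb-client \
        -o 'jsonpath={.items[].metadata.name}')
    kubectl exec "$client" -- bash -c \
        "printf 'use myApp\n db.test.find()\n' | mongo 'mongodb://$uri.svc.cluster.local/admin?ssl=false&replicaSet=rs0'" \
        | egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match' \
        | sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' \
        >"$workdir/find"
    # a non-empty diff exits non-zero and fails the test under set -e
    diff "$expected" "$workdir/find"
}

Masking ObjectIds and the -N ordinal in service hostnames keeps the golden file stable across runs, which is why the same find.json is diffed against rs0-0, rs0-1, and rs0-2 in turn.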
+ compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ local command=find
+ local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ local postfix=
+ local suffix=
+ local database=myApp
+ local collection=test
+ egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
+ run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 mongodb ''
+ local 'command=use myApp\n db.test.find()'
+ local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487
+ local driver=mongodb
+ local suffix=.svc.cluster.local
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.zbmPlCYX3q
+++ mktemp
++ local LAST_ERR=/tmp/tmp.iSegX5kh5a
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.zbmPlCYX3q
++ cat /tmp/tmp.iSegX5kh5a
++ rm /tmp/tmp.zbmPlCYX3q /tmp/tmp.iSegX5kh5a
++ return 0
+ local client_container=psmdb-client-5f578b7f94-bghqv
+ local mongo_flag=
+ [[ myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487 == *cfg* ]]
+ replica_set=rs0
+ kubectl_bin exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
++ mktemp
+ local LAST_OUT=/tmp/tmp.XsCWnibaBz
++ mktemp
+ local LAST_ERR=/tmp/tmp.V81ij6uG9x
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-bghqv -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.demand-backup-physical-18487.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.XsCWnibaBz
+ cat /tmp/tmp.V81ij6uG9x
+ rm /tmp/tmp.XsCWnibaBz /tmp/tmp.V81ij6uG9x
+ return 0
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical/compare/find.json /tmp/tmp.J3CSzlM5Aj/find
+ echo
+ set -o xtrace
+ destroy demand-backup-physical-18487
+ local namespace=demand-backup-physical-18487
+ local ignore_logs=true
+ desc 'destroy cluster/operator and all other resources'
+ set +o xtrace
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbbackups.psmdb.percona.com" deleted
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbrestores.psmdb.percona.com" deleted
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbs.psmdb.percona.com" deleted
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n demand-backup-physical-18487 backup-minio --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaservermongodbbackup.psmdb.percona.com/backup-minio patched
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n demand-backup-physical-18487 backup-minio-arbiter-nv --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaservermongodbbackup.psmdb.percona.com/backup-minio-arbiter-nv patched
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com condition met
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbs"
clusterrole.rbac.authorization.k8s.io "percona-server-mongodb-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "service-account-percona-server-mongodb-operator" deleted
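The teardown works because deleting a CRD blocks while any of its custom resources still carry finalizers, so the harness first patches the finalizers to an empty list; the repeated "doesn't have a resource type" errors are benign and simply mean the CRD in question was already removed from the API server. A hedged sketch of that cleanup pattern; the loop and error suppression are an illustration, not the harness's exact code:

for crd in perconaservermongodbbackups.psmdb.percona.com \
           perconaservermongodbrestores.psmdb.percona.com \
           perconaservermongodbs.psmdb.percona.com; do
    # clear finalizers on every remaining CR so deletion can proceed,
    # ignoring errors from CRDs the API server no longer serves
    kubectl get "$crd" --all-namespaces \
        -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' 2>/dev/null \
    | while read -r ns name; do
        kubectl patch "$crd" -n "$ns" "$name" --type=merge \
            -p '{"metadata":{"finalizers":[]}}' || true
    done
    kubectl delete crd "$crd" --ignore-not-found
done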