Log: /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/logs/scheduled-backup.log
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
error: the server doesn't have a resource type "perconaservermongodbbackups"
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbbackups"
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbs"
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces psmdb-operator
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace psmdb-operator
-----------------------------------------------------------------------------------
namespace/psmdb-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1556-43640f06-2-cluster2" modified.
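[editor's note] The "kubectl patch ... finalizers" lines above are the pre-test cleanup: any leftover PSMDB custom resources get their finalizers cleared so deleting the old CRDs cannot hang, and the "doesn't have a resource type" / "NotFound" errors are expected on a clean cluster. A minimal sketch of that pattern, assuming a hypothetical helper name and ignoring the suite's exact xargs plumbing:

cleanup_old_psmdb_crds() {
    local crd ns name
    for crd in perconaservermongodbbackups.psmdb.percona.com \
               perconaservermongodbrestores.psmdb.percona.com \
               perconaservermongodbs.psmdb.percona.com; do
        # clear finalizers on every leftover custom resource so deletion cannot block
        kubectl get "$crd" --all-namespaces \
            -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' 2>/dev/null |
        while read -r ns name; do
            [ -n "$name" ] || continue
            kubectl patch "$crd" "$name" -n "$ns" --type=merge -p '{"metadata":{"finalizers":[]}}' || true
        done
        # best-effort removal of the old CRD itself
        kubectl delete crd "$crd" --ignore-not-found || true
    done
}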
-----------------------------------------------------------------------------------
start PSMDB operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
waiting for pod/percona-server-mongodb-operator-5b96847cc9-d9qvd to be ready.OK
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces scheduled-backup-5530
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace scheduled-backup-5530
-----------------------------------------------------------------------------------
namespace/scheduled-backup-5530 created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1556-43640f06-2-cluster2" modified.
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
"minio" has been removed from your repositories
"minio" has been added to your repositories
NAME: minio-service
LAST DEPLOYED: Fri May 24 15:53:54 2024
NAMESPACE: scheduled-backup-5530
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.scheduled-backup-5530.svc.cluster.local

To access MinIO from localhost, run the below commands:
1. export POD_NAME=$(kubectl get pods --namespace scheduled-backup-5530 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace scheduled-backup-5530
Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace scheduled-backup-5530 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace scheduled-backup-5530 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local
waiting for pod/minio-service-57dd49b-k7l99 to be ready.OK
service/minio-service created
make_bucket: operator-testing
pod "aws-cli" deleted
-----------------------------------------------------------------------------------
add labels
-----------------------------------------------------------------------------------
node/gke-jen-psmdb-1556-43640-default-pool-73aea912-2kmw labeled
-----------------------------------------------------------------------------------
create PriorityClass
-----------------------------------------------------------------------------------
priorityclass.scheduling.k8s.io/high-priority created
-----------------------------------------------------------------------------------
create secrets and start client
-----------------------------------------------------------------------------------
secret/some-users created
deployment.apps/psmdb-client created
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
create first PSMDB cluster some-name-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
waiting for pod/some-name-rs0-0 to be ready........OK
waiting for pod/some-name-rs0-1 to be ready.....OK
waiting for pod/some-name-rs0-2 to be ready.....OK
Waiting for cluster readyness....
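[editor's note] The "make_bucket: operator-testing" and pod "aws-cli" deleted lines in the Minio section above indicate the test creates its bucket from a short-lived aws-cli pod pointed at the in-cluster MinIO service. A hedged sketch of that step; the image name and credential values are placeholders, the real ones come from minio-secret:

kubectl -n scheduled-backup-5530 run aws-cli --rm -i --restart=Never \
    --image=perconalab/awscli \
    --env=AWS_ACCESS_KEY_ID=some-access-key \
    --env=AWS_SECRET_ACCESS_KEY=some-access-secret \
    -- aws --endpoint-url http://minio-service:9000 s3 mb s3://operator-testing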
-----------------------------------------------------------------------------------
check if service and statefulset created with expected config
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
create user myApp
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("43e91762-8ddb-4c61-a29c-d8601cf00949") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
Successfully added user: { "user" : "myApp", "roles" : [ { "db" : "myApp", "role" : "readWrite" } ] }
bye
-----------------------------------------------------------------------------------
write data, read from all
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("9efba12e-0c52-44c1-92c3-88380c46678a") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
some-name-rs0-0
some-name-rs0-1
some-name-rs0-2
-----------------------------------------------------------------------------------
add backups schedule, wait for the first backup
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name configured
perconaservermongodb.psmdb.percona.com/some-name configured
cron-some-name-20240524155700-dqlx2...................................
cron-some-name-20240524155700-8zg5c.
cron-some-name-20240524155700-426vl.
cron-some-name-20240524155700-bdbqk.
-----------------------------------------------------------------------------------
check backup and restore -- minio
-----------------------------------------------------------------------------------
2024-05-24 15:57:26 55 myApp.test.gz
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("abbdbbac-0c5d-42cb-b98b-6997ab4de6d9") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240524155700-bdbqk created
waiting psmdb-restore/cron-some-name-20240524155700-bdbqk to reach ready state..........
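[editor's note] Each "perconaservermongodbrestore.psmdb.percona.com/restore-cron-... created" line corresponds to a PerconaServerMongoDBRestore object that points back at one of the scheduled backups listed above. A rough sketch of what such an object looks like when applied from the shell (field layout per the operator's public API; the exact manifest the suite templates is not shown in this log):

backup_name=cron-some-name-20240524155700-bdbqk    # the minio scheduled backup from above
cat <<EOF | kubectl apply -n scheduled-backup-5530 -f -
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore-${backup_name}
spec:
  clusterName: some-name
  backupName: ${backup_name}
EOF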
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.sBEZun9IJL +++ mktemp ++ local LAST_ERR=/tmp/tmp.amjUvOgQNd ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.sBEZun9IJL ++ cat /tmp/tmp.amjUvOgQNd ++ rm /tmp/tmp.sBEZun9IJL /tmp/tmp.amjUvOgQNd ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.xPBKNHHpvL +++ mktemp ++ local LAST_ERR=/tmp/tmp.m3BmshJvTj ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.xPBKNHHpvL ++ cat /tmp/tmp.m3BmshJvTj ++ rm /tmp/tmp.xPBKNHHpvL /tmp/tmp.m3BmshJvTj ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.2sSfD4htIx ++ mktemp + local LAST_ERR=/tmp/tmp.YryPCbO5FQ + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.2sSfD4htIx + cat /tmp/tmp.YryPCbO5FQ + rm /tmp/tmp.2sSfD4htIx /tmp/tmp.YryPCbO5FQ + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving 
history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' +++ mktemp ++ local LAST_OUT=/tmp/tmp.oB57uZJjkP +++ mktemp ++ local LAST_ERR=/tmp/tmp.CmIwW2r5pB ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.oB57uZJjkP ++ cat /tmp/tmp.CmIwW2r5pB ++ rm /tmp/tmp.oB57uZJjkP /tmp/tmp.CmIwW2r5pB ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.9AB06cF404 ++ mktemp + local LAST_ERR=/tmp/tmp.54bwaGeaMy + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.9AB06cF404 + cat /tmp/tmp.54bwaGeaMy + rm /tmp/tmp.9AB06cF404 /tmp/tmp.54bwaGeaMy + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.7aHVwtjKkn +++ mktemp ++ local LAST_ERR=/tmp/tmp.NcScR9q46m ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.7aHVwtjKkn ++ cat /tmp/tmp.NcScR9q46m ++ rm /tmp/tmp.7aHVwtjKkn 
/tmp/tmp.NcScR9q46m ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.0JGXjZvzEK ++ mktemp + local LAST_ERR=/tmp/tmp.BpLyhE0VHa + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.0JGXjZvzEK + cat /tmp/tmp.BpLyhE0VHa + rm /tmp/tmp.0JGXjZvzEK /tmp/tmp.BpLyhE0VHa + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + '[' -z '' ']' + desc 'check backup and restore -- aws-s3' + set +o xtrace ----------------------------------------------------------------------------------- check backup and restore -- aws-s3 ----------------------------------------------------------------------------------- Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false Implicit session: session { "id" : UUID("4fc00a81-b5cb-4ddf-9de8-eef68ccb6101") } Percona Server for MongoDB server version: v7.0.8-5 WARNING: shell and server versions do not match switched to db myApp WriteResult({ "nInserted" : 1 }) bye perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240524155700-dqlx2 created waiting psmdb-restore/cron-some-name-20240524155700-dqlx2 to reach ready state............. 
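[editor's note] The repeated "mktemp ... LAST_OUT ... LAST_ERR ... seq 0 2 ... set +e" fragments in these traces come from the suite's kubectl_bin wrapper, which retries kubectl a few times and buffers stdout/stderr in temp files. Reconstructed roughly from the trace, not the verbatim helper:

kubectl_bin() {
    local LAST_OUT LAST_ERR exit_status=0 timeout=4 i
    LAST_OUT=$(mktemp)
    LAST_ERR=$(mktemp)
    for i in $(seq 0 2); do          # up to three attempts
        set +e
        kubectl "$@" 1>"$LAST_OUT" 2>"$LAST_ERR"
        exit_status=$?
        set -e
        if [ "$exit_status" -ne 0 ]; then
            sleep "$timeout"         # brief pause before retrying a failed call
            continue
        fi
        break
    done
    cat "$LAST_OUT"
    cat "$LAST_ERR" >&2
    rm "$LAST_OUT" "$LAST_ERR"
    return "$exit_status"
}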
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.q62Q76sBWL +++ mktemp ++ local LAST_ERR=/tmp/tmp.C6Zpo8VHIi ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.q62Q76sBWL ++ cat /tmp/tmp.C6Zpo8VHIi ++ rm /tmp/tmp.q62Q76sBWL /tmp/tmp.C6Zpo8VHIi ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.LLqcHUIf0a +++ mktemp + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ local LAST_ERR=/tmp/tmp.dhuLx8pPjI ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.LLqcHUIf0a ++ cat /tmp/tmp.dhuLx8pPjI ++ rm /tmp/tmp.LLqcHUIf0a /tmp/tmp.dhuLx8pPjI ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.QRCjs42BHi ++ mktemp + local LAST_ERR=/tmp/tmp.ob5a0MtsTX + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.QRCjs42BHi + cat /tmp/tmp.ob5a0MtsTX + rm /tmp/tmp.QRCjs42BHi /tmp/tmp.ob5a0MtsTX + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; 
s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.sPndwnI2PF +++ mktemp ++ local LAST_ERR=/tmp/tmp.D7HjboTOID ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.sPndwnI2PF ++ cat /tmp/tmp.D7HjboTOID ++ rm /tmp/tmp.sPndwnI2PF /tmp/tmp.D7HjboTOID ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.jskT6UrWch ++ mktemp + local LAST_ERR=/tmp/tmp.hNHsWIFH18 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.jskT6UrWch + cat /tmp/tmp.hNHsWIFH18 + rm /tmp/tmp.jskT6UrWch /tmp/tmp.hNHsWIFH18 + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ local LAST_OUT=/tmp/tmp.t7gh7nnocw +++ mktemp ++ local LAST_ERR=/tmp/tmp.TXhxTFGZKv ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.t7gh7nnocw ++ cat /tmp/tmp.TXhxTFGZKv ++ rm 
/tmp/tmp.t7gh7nnocw /tmp/tmp.TXhxTFGZKv ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.vHyy47Y9sy ++ mktemp + local LAST_ERR=/tmp/tmp.Ly9b02bxLB + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.vHyy47Y9sy + cat /tmp/tmp.Ly9b02bxLB + rm /tmp/tmp.vHyy47Y9sy /tmp/tmp.Ly9b02bxLB + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + desc 'check backup and restore -- gcp-cs' + set +o xtrace ----------------------------------------------------------------------------------- check backup and restore -- gcp-cs ----------------------------------------------------------------------------------- Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false Implicit session: session { "id" : UUID("f9f5801e-9b82-48da-bf8a-49b4e01e54d8") } Percona Server for MongoDB server version: v7.0.8-5 WARNING: shell and server versions do not match switched to db myApp WriteResult({ "nInserted" : 1 }) bye perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240524155700-8zg5c created waiting psmdb-restore/cron-some-name-20240524155700-8zg5c to reach ready state............ 
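[editor's note] The compare_mongo_cmd / run_mongo traces repeated for every replica-set pod boil down to: run a mongo shell inside the psmdb-client pod, strip noise and unstable values (ObjectIds, pod ordinals), and diff the result against a golden file. Reconstructed roughly from the trace; $tmp_dir and the golden-file path are placeholders for the suite's own variables:

run_mongo() {
    local command="$1" uri="$2" suffix=.svc.cluster.local client_container
    client_container=$(kubectl get pods --selector=name=psmdb-client \
        -o 'jsonpath={.items[].metadata.name}')
    kubectl exec "$client_container" -- bash -c \
        "printf '$command\n' | mongo mongodb://$uri$suffix/admin?ssl=false\&replicaSet=rs0"
}

compare_mongo_cmd() {
    local command="$1" uri="$2" database=myApp collection=test
    run_mongo "use $database\n db.$collection.$command()" "$uri" |
        egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match' |
        sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' >"$tmp_dir/$command"
    # the filtered output must match the golden file checked into the test suite
    diff "e2e-tests/scheduled-backup/compare/$command.json" "$tmp_dir/$command"
}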
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.kjr0fAawa2 +++ mktemp ++ local LAST_ERR=/tmp/tmp.nrDZbYaX9A ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.kjr0fAawa2 ++ cat /tmp/tmp.nrDZbYaX9A ++ rm /tmp/tmp.kjr0fAawa2 /tmp/tmp.nrDZbYaX9A ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.8ilo3vcAsa +++ mktemp ++ local LAST_ERR=/tmp/tmp.TTyUuQ3EsZ ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.8ilo3vcAsa ++ cat /tmp/tmp.TTyUuQ3EsZ ++ rm /tmp/tmp.8ilo3vcAsa /tmp/tmp.TTyUuQ3EsZ ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.CINQcjikcm ++ mktemp + local LAST_ERR=/tmp/tmp.rz9f6dAn6v + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.CINQcjikcm + cat /tmp/tmp.rz9f6dAn6v + rm /tmp/tmp.CINQcjikcm /tmp/tmp.rz9f6dAn6v + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' 
myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ local LAST_OUT=/tmp/tmp.HEVno0bYBP +++ mktemp ++ local LAST_ERR=/tmp/tmp.YbY46iN5eX ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.HEVno0bYBP ++ cat /tmp/tmp.YbY46iN5eX ++ rm /tmp/tmp.HEVno0bYBP /tmp/tmp.YbY46iN5eX ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.ELmAQf12zl ++ mktemp + local LAST_ERR=/tmp/tmp.oAgTpkBe1Y + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.ELmAQf12zl + cat /tmp/tmp.oAgTpkBe1Y + rm /tmp/tmp.ELmAQf12zl /tmp/tmp.oAgTpkBe1Y + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.3faDYfr6PL +++ mktemp ++ local LAST_ERR=/tmp/tmp.KjJUBPwxsA ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.3faDYfr6PL ++ cat /tmp/tmp.KjJUBPwxsA ++ rm 
/tmp/tmp.3faDYfr6PL /tmp/tmp.KjJUBPwxsA ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.0k20ff8wBp ++ mktemp + local LAST_ERR=/tmp/tmp.J2q9IEymfB + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.0k20ff8wBp + cat /tmp/tmp.J2q9IEymfB + rm /tmp/tmp.0k20ff8wBp /tmp/tmp.J2q9IEymfB + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + desc 'check backup and restore -- azure-blob' + set +o xtrace ----------------------------------------------------------------------------------- check backup and restore -- azure-blob ----------------------------------------------------------------------------------- Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false Implicit session: session { "id" : UUID("ee1ed7cc-a827-469f-816d-c4db81509ebf") } Percona Server for MongoDB server version: v7.0.8-5 WARNING: shell and server versions do not match switched to db myApp WriteResult({ "nInserted" : 1 }) bye perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240524155700-426vl created waiting psmdb-restore/cron-some-name-20240524155700-426vl to reach ready state............ 
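[editor's note] After every restore the suite calls wait_cluster_consistency, which produces the "waiting for cluster readyness....." runs in these traces (most visibly in the physical-restore section further below, where the state stays "initializing" for many iterations). Reconstructed roughly from the trace; kubectl_bin is the retrying wrapper sketched earlier:

wait_cluster_consistency() {
    local cluster_name="$1"
    local wait_time=32
    local retry=0
    sleep 7                              # give the operator a moment to update .status
    echo -n 'waiting for cluster readyness'
    until [[ $(kubectl_bin get psmdb "$cluster_name" -o 'jsonpath={.status.state}') == "ready" ]]; do
        let retry+=1
        if [ "$retry" -ge "$wait_time" ]; then
            echo "cluster ${cluster_name} did not become ready in time"
            return 1
        fi
        echo -n .
        sleep 10
    done
}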
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.FcY2LoKgCk +++ mktemp ++ local LAST_ERR=/tmp/tmp.0ZInsuNmUX ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.FcY2LoKgCk ++ cat /tmp/tmp.0ZInsuNmUX ++ rm /tmp/tmp.FcY2LoKgCk /tmp/tmp.0ZInsuNmUX ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ local LAST_OUT=/tmp/tmp.sKyMTCMmTo +++ mktemp ++ local LAST_ERR=/tmp/tmp.Qlq9V8uXOJ ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.sKyMTCMmTo ++ cat /tmp/tmp.Qlq9V8uXOJ ++ rm /tmp/tmp.sKyMTCMmTo /tmp/tmp.Qlq9V8uXOJ ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.CHbNnZh9If ++ mktemp + local LAST_ERR=/tmp/tmp.KBcR7U5YCN + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.CHbNnZh9If + cat /tmp/tmp.KBcR7U5YCN + rm /tmp/tmp.CHbNnZh9If /tmp/tmp.KBcR7U5YCN + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; 
s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.jH59pccjyg +++ mktemp ++ local LAST_ERR=/tmp/tmp.crOmc1wnQ2 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.jH59pccjyg ++ cat /tmp/tmp.crOmc1wnQ2 ++ rm /tmp/tmp.jH59pccjyg /tmp/tmp.crOmc1wnQ2 ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.8DboZa0a0o ++ mktemp + local LAST_ERR=/tmp/tmp.8ooHDdKoiy + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.8DboZa0a0o + cat /tmp/tmp.8ooHDdKoiy + rm /tmp/tmp.8DboZa0a0o /tmp/tmp.8ooHDdKoiy + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.cjvFZgQyCD +++ mktemp ++ local LAST_ERR=/tmp/tmp.mKEFjGGDfW ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.cjvFZgQyCD ++ cat /tmp/tmp.mKEFjGGDfW ++ rm 
/tmp/tmp.cjvFZgQyCD /tmp/tmp.mKEFjGGDfW ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.SCpPRBSLsE ++ mktemp + local LAST_ERR=/tmp/tmp.Ind0bnPEe5 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.SCpPRBSLsE + cat /tmp/tmp.Ind0bnPEe5 + rm /tmp/tmp.SCpPRBSLsE /tmp/tmp.Ind0bnPEe5 + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + desc 'add physical backup schedule, wait for the first backup' + set +o xtrace ----------------------------------------------------------------------------------- add physical backup schedule, wait for the first backup ----------------------------------------------------------------------------------- perconaservermongodb.psmdb.percona.com/some-name configured perconaservermongodb.psmdb.percona.com/some-name configured cron-some-name-20240524160400-dxxcs. ----------------------------------------------------------------------------------- check backup and restore -- minio ----------------------------------------------------------------------------------- perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240524160400-dxxcs created waiting psmdb-restore/cron-some-name-20240524160400-dxxcs to reach ready state................................................................................................. + '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.0hHBK3fG2n +++ mktemp ++ local LAST_ERR=/tmp/tmp.5obKVQNnI2 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.0hHBK3fG2n ++ cat /tmp/tmp.5obKVQNnI2 ++ rm /tmp/tmp.0hHBK3fG2n /tmp/tmp.5obKVQNnI2 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 1 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.djnXpKbYzM +++ mktemp ++ local LAST_ERR=/tmp/tmp.JFDxdhAyNP ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.djnXpKbYzM ++ cat /tmp/tmp.JFDxdhAyNP ++ rm /tmp/tmp.djnXpKbYzM /tmp/tmp.JFDxdhAyNP ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 2 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Gt17I75RnU +++ mktemp ++ local LAST_ERR=/tmp/tmp.AcTXmyUKj3 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Gt17I75RnU ++ cat /tmp/tmp.AcTXmyUKj3 ++ rm /tmp/tmp.Gt17I75RnU /tmp/tmp.AcTXmyUKj3 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 3 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.hgieGkLc5n +++ mktemp ++ local LAST_ERR=/tmp/tmp.a7xRXFsijp ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.hgieGkLc5n ++ cat /tmp/tmp.a7xRXFsijp ++ rm /tmp/tmp.hgieGkLc5n /tmp/tmp.a7xRXFsijp ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 4 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.yWTmOkNoHk +++ mktemp ++ local LAST_ERR=/tmp/tmp.8ofQo8I3zs ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.yWTmOkNoHk ++ cat /tmp/tmp.8ofQo8I3zs ++ rm /tmp/tmp.yWTmOkNoHk /tmp/tmp.8ofQo8I3zs ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 5 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.O0Fh2zPmbW +++ mktemp ++ local LAST_ERR=/tmp/tmp.EjTiA4SGfO ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.O0Fh2zPmbW ++ cat /tmp/tmp.EjTiA4SGfO ++ rm /tmp/tmp.O0Fh2zPmbW /tmp/tmp.EjTiA4SGfO ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 6 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Bjb3E7UVq8 +++ mktemp ++ local LAST_ERR=/tmp/tmp.7agwVgKh6o ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Bjb3E7UVq8 ++ cat /tmp/tmp.7agwVgKh6o ++ rm /tmp/tmp.Bjb3E7UVq8 /tmp/tmp.7agwVgKh6o ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 7 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.g9CQTs5r7i +++ mktemp ++ local LAST_ERR=/tmp/tmp.NAFPQwxiRr ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.g9CQTs5r7i ++ cat /tmp/tmp.NAFPQwxiRr ++ rm /tmp/tmp.g9CQTs5r7i /tmp/tmp.NAFPQwxiRr ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 8 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Xe8ks1jRWz +++ mktemp ++ local LAST_ERR=/tmp/tmp.hqEQ9Jze8C ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Xe8ks1jRWz ++ cat /tmp/tmp.hqEQ9Jze8C ++ rm /tmp/tmp.Xe8ks1jRWz /tmp/tmp.hqEQ9Jze8C ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 9 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.qNf1Nl2bR9 +++ mktemp ++ local LAST_ERR=/tmp/tmp.zHMZ5jMpMi ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.qNf1Nl2bR9 ++ cat /tmp/tmp.zHMZ5jMpMi ++ rm /tmp/tmp.qNf1Nl2bR9 /tmp/tmp.zHMZ5jMpMi ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 10 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.jahfnmpSyN +++ mktemp ++ local LAST_ERR=/tmp/tmp.K2tn3rWN05 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.jahfnmpSyN ++ cat /tmp/tmp.K2tn3rWN05 ++ rm /tmp/tmp.jahfnmpSyN /tmp/tmp.K2tn3rWN05 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 11 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.e3dGguUyIY +++ mktemp ++ local LAST_ERR=/tmp/tmp.R9pCHZ8VDx ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.e3dGguUyIY ++ cat /tmp/tmp.R9pCHZ8VDx ++ rm /tmp/tmp.e3dGguUyIY /tmp/tmp.R9pCHZ8VDx ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 12 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.ku1WDtrMnS +++ mktemp ++ local LAST_ERR=/tmp/tmp.24K3U8vXYI ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.ku1WDtrMnS ++ cat /tmp/tmp.24K3U8vXYI ++ rm /tmp/tmp.ku1WDtrMnS /tmp/tmp.24K3U8vXYI ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 13 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.rVSE534M5c +++ mktemp ++ local LAST_ERR=/tmp/tmp.VmqXeK3J8E ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.rVSE534M5c ++ cat /tmp/tmp.VmqXeK3J8E ++ rm /tmp/tmp.rVSE534M5c /tmp/tmp.VmqXeK3J8E ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.keWFFB7uYU +++ mktemp ++ local LAST_ERR=/tmp/tmp.2mafOgrmN4 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.keWFFB7uYU ++ cat /tmp/tmp.2mafOgrmN4 ++ rm /tmp/tmp.keWFFB7uYU /tmp/tmp.2mafOgrmN4 ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.9NnsY4BCpg ++ mktemp + local LAST_ERR=/tmp/tmp.6SpGc3cAOv + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.9NnsY4BCpg + cat /tmp/tmp.6SpGc3cAOv + rm /tmp/tmp.9NnsY4BCpg /tmp/tmp.6SpGc3cAOv + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 
's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.ncgfvA9VAT +++ mktemp ++ local LAST_ERR=/tmp/tmp.I1rqvqOt6c ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.ncgfvA9VAT ++ cat /tmp/tmp.I1rqvqOt6c ++ rm /tmp/tmp.ncgfvA9VAT /tmp/tmp.I1rqvqOt6c ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.6uliQL9Hoc ++ mktemp + local LAST_ERR=/tmp/tmp.zu4sxHuU5O + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.6uliQL9Hoc + cat /tmp/tmp.zu4sxHuU5O + rm /tmp/tmp.6uliQL9Hoc /tmp/tmp.zu4sxHuU5O + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 + local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.KnlFitxyeh +++ mktemp ++ local LAST_ERR=/tmp/tmp.8eWpP4Iu6R ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.KnlFitxyeh ++ cat /tmp/tmp.8eWpP4Iu6R ++ rm /tmp/tmp.KnlFitxyeh /tmp/tmp.8eWpP4Iu6R ++ return 0 + local client_container=psmdb-client-7469665986-6mn97 + local mongo_flag= + [[ 
myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.SjKdQkPNyL ++ mktemp + local LAST_ERR=/tmp/tmp.iAh2rVNDY0 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-6mn97 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-5530.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.SjKdQkPNyL + cat /tmp/tmp.iAh2rVNDY0 + rm /tmp/tmp.SjKdQkPNyL /tmp/tmp.iAh2rVNDY0 + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.Op1J3IwZwe/find + sleep 60 + unlabel_node + desc 'remove labels' + set +o xtrace ----------------------------------------------------------------------------------- remove labels ----------------------------------------------------------------------------------- node/gke-jen-psmdb-1556-43640-default-pool-73aea912-2kmw labeled ----------------------------------------------------------------------------------- destroy cluster/operator and all other resources ----------------------------------------------------------------------------------- ----------------------------------------------------------------------------------- get and delete old CRDs and RBAC ----------------------------------------------------------------------------------- customresourcedefinition.apiextensions.k8s.io "perconaservermongodbbackups.psmdb.percona.com" deleted customresourcedefinition.apiextensions.k8s.io "perconaservermongodbrestores.psmdb.percona.com" deleted customresourcedefinition.apiextensions.k8s.io "perconaservermongodbs.psmdb.percona.com" deleted + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-5530 cron-some-name-20240524155700-426vl --type=merge -p '{"metadata":{"finalizers":[]}}' perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240524155700-426vl patched + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-5530 cron-some-name-20240524155700-8zg5c --type=merge -p '{"metadata":{"finalizers":[]}}' E0524 16:12:13.108905 14623 memcache.go:287] couldn't get resource list for psmdb.percona.com/v1-11-0: the server could not find the requested resource E0524 16:12:13.109435 14623 memcache.go:287] couldn't get resource list for psmdb.percona.com/v1-12-0: the server could not find the requested resource E0524 16:12:13.110145 14623 memcache.go:287] couldn't get resource list for psmdb.percona.com/v1-10-0: the server could not find the requested resource perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240524155700-8zg5c patched + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-5530 cron-some-name-20240524155700-bdbqk --type=merge -p '{"metadata":{"finalizers":[]}}' perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240524155700-bdbqk patched + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-5530 cron-some-name-20240524160400-dxxcs --type=merge -p '{"metadata":{"finalizers":[]}}' 
perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240524160400-dxxcs patched customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com condition met error: the server doesn't have a resource type "perconaservermongodbrestores" + kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}' error: the server doesn't have a resource type "perconaservermongodbrestores" error: the server doesn't have a resource type "perconaservermongodbs" + kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}' error: the server doesn't have a resource type "perconaservermongodbs" clusterrole.rbac.authorization.k8s.io "percona-server-mongodb-operator" deleted clusterrolebinding.rbac.authorization.k8s.io "service-account-percona-server-mongodb-operator" deleted ----------------------------------------------------------------------------------- test passed -----------------------------------------------------------------------------------
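
The teardown above patches every leftover backup custom resource with an empty finalizers list before the CRDs are removed, so deletion does not hang waiting on the operator's finalizer handling. A sketch of that cleanup pattern, generalized from the per-object patches in the trace:

for cr in $(kubectl get perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-5530 -o name); do
    # clearing metadata.finalizers lets the backup objects (and then the CRD)
    # be deleted even if the operator is already gone
    kubectl patch "$cr" -n scheduled-backup-5530 --type=merge -p '{"metadata":{"finalizers":[]}}'
done
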
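
For reference, the data verification traced earlier (compare_mongo_cmd for rs0-0, rs0-1 and rs0-2) follows one pattern: run a find through the psmdb-client pod, strip volatile fields, and diff against the expected fixture. A sketch of that pattern is below; the helper name is hypothetical, while the pod selector, filters and fixture path are the ones shown in the trace.

check_replica_data() {
    local uri=$1    # e.g. myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-5530
    local client_pod out
    out=$(mktemp)
    client_pod=$(kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}')
    # run the query inside the client pod, then normalize ObjectIds and service suffixes
    kubectl exec "$client_pod" -- bash -c \
        "printf 'use myApp\n db.test.find()\n' | mongo 'mongodb://${uri}.svc.cluster.local/admin?ssl=false&replicaSet=rs0'" \
        | egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match' \
        | sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' \
        >"$out"
    # any difference from the expected fixture fails the check
    diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1556/e2e-tests/scheduled-backup/compare/find.json "$out"
}
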