Log: /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/logs/scheduled-backup.log
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
error: the server doesn't have a resource type "perconaservermongodbbackups"
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbbackups"
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbs"
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces psmdb-operator
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace psmdb-operator
-----------------------------------------------------------------------------------
namespace/psmdb-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1559-6384b519-2-cluster2" modified.
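The cleanup above amounts to clearing finalizers on any leftover PSMDB custom resources, removing old namespaces, and recreating the working namespace before the operator is installed. A minimal sketch of the equivalent manual steps, assuming cluster-admin access (the resource and namespace names are taken from this run and are illustrative only):

# remove finalizers so a leftover custom resource cannot block deletion (resource name is illustrative)
kubectl patch perconaservermongodbs.psmdb.percona.com some-name --type=merge -p '{"metadata":{"finalizers":[]}}'
# recreate the working namespace and point the current kubectl context at it
kubectl delete namespace psmdb-operator --ignore-not-found
kubectl create namespace psmdb-operator
kubectl config set-context --current --namespace=psmdb-operator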
-----------------------------------------------------------------------------------
start PSMDB operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
waiting for pod/percona-server-mongodb-operator-54f884cbc-hd6xk to be ready.OK
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces scheduled-backup-2974
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace scheduled-backup-2974
-----------------------------------------------------------------------------------
namespace/scheduled-backup-2974 created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1559-6384b519-2-cluster2" modified.
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
"minio" has been removed from your repositories
"minio" has been added to your repositories
NAME: minio-service
LAST DEPLOYED: Tue Jun 4 15:27:55 2024
NAMESPACE: scheduled-backup-2974
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.scheduled-backup-2974.svc.cluster.local

To access MinIO from localhost, run the below commands:

1. export POD_NAME=$(kubectl get pods --namespace scheduled-backup-2974 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace scheduled-backup-2974

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:

1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace scheduled-backup-2974 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace scheduled-backup-2974 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local

waiting for pod/minio-service-57dd49b-kpkq9 to be ready.OK
service/minio-service created
make_bucket: operator-testing
pod "aws-cli" deleted
-----------------------------------------------------------------------------------
add labels
-----------------------------------------------------------------------------------
node/gke-jen-psmdb-1559-6384b-default-pool-2d95508a-4259 labeled
-----------------------------------------------------------------------------------
create PriorityClass
-----------------------------------------------------------------------------------
priorityclass.scheduling.k8s.io/high-priority created
-----------------------------------------------------------------------------------
create secrets and start client
-----------------------------------------------------------------------------------
secret/some-users created
deployment.apps/psmdb-client created
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
create first PSMDB cluster some-name-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
waiting for pod/some-name-rs0-0 to be ready...........OK
waiting for pod/some-name-rs0-1 to be ready..........OK
waiting for pod/some-name-rs0-2 to be ready..............OK
Waiting for cluster readyness
-----------------------------------------------------------------------------------
check if service and statefulset created with expected config
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
create user myApp
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("1c876840-eb02-47b5-837f-edcf38d9d7b1") }
Percona Server for MongoDB server version: v7.0.11-6
WARNING: shell and server versions do not match
Successfully added user: { "user" : "myApp", "roles" : [ { "db" : "myApp", "role" : "readWrite" } ] }
bye
-----------------------------------------------------------------------------------
write data, read from all
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to:
mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("b64949b3-f621-4385-b4f2-c4008092dcda") }
Percona Server for MongoDB server version: v7.0.11-6
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
some-name-rs0-0
some-name-rs0-1
some-name-rs0-2
-----------------------------------------------------------------------------------
add backups schedule, wait for the first backup
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name configured
perconaservermongodb.psmdb.percona.com/some-name configured
cron-some-name-20240604153100-r6mmt...........................................
cron-some-name-20240604153100-zkq7s.
cron-some-name-20240604153100-zrkdm.
cron-some-name-20240604153100-qwsh2.
-----------------------------------------------------------------------------------
check backup and restore -- minio
-----------------------------------------------------------------------------------
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: unable to upgrade connection: container aws-cli not found in pod aws-cli_scheduled-backup-2974
2024-06-04 15:32:29 55 myApp.test.gz
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("e08f3eb6-ddc3-4885-a120-3e7b756ed90c") }
Percona Server for MongoDB server version: v7.0.11-6
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240604153100-qwsh2 created
waiting psmdb-restore/cron-some-name-20240604153100-qwsh2 to reach ready state............
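The wait_cluster_consistency trace that follows keeps re-reading the PerconaServerMongoDB object until the operator reports it ready. Condensed, the polling it performs is roughly the sketch below; the psmdb short name, resource name, and retry interval are taken from this log, and the loop structure is a simplification of the harness function:

# poll the PSMDB custom resource until .status.state reports "ready"
until [ "$(kubectl get psmdb some-name -o jsonpath='{.status.state}')" = "ready" ]; do
    echo -n .
    sleep 10
done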
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.i3rJgvDd69 +++ mktemp ++ local LAST_ERR=/tmp/tmp.xY7vXSBXJE ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.i3rJgvDd69 ++ cat /tmp/tmp.xY7vXSBXJE ++ rm /tmp/tmp.i3rJgvDd69 /tmp/tmp.xY7vXSBXJE ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.G6dhbbZBqH +++ mktemp ++ local LAST_ERR=/tmp/tmp.j3yU8BV5Lw ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.G6dhbbZBqH ++ cat /tmp/tmp.j3yU8BV5Lw ++ rm /tmp/tmp.G6dhbbZBqH /tmp/tmp.j3yU8BV5Lw ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.uMrwtDsq82 ++ mktemp + local LAST_ERR=/tmp/tmp.DpHnhJoofH + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.uMrwtDsq82 + cat /tmp/tmp.DpHnhJoofH + rm /tmp/tmp.uMrwtDsq82 /tmp/tmp.DpHnhJoofH + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving 
history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.eGEEjKi2Wy +++ mktemp ++ local LAST_ERR=/tmp/tmp.alTRmpIG9J ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.eGEEjKi2Wy ++ cat /tmp/tmp.alTRmpIG9J ++ rm /tmp/tmp.eGEEjKi2Wy /tmp/tmp.alTRmpIG9J ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.xQNseac0PK ++ mktemp + local LAST_ERR=/tmp/tmp.81A0u6PxFD + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.xQNseac0PK + cat /tmp/tmp.81A0u6PxFD + rm /tmp/tmp.xQNseac0PK /tmp/tmp.81A0u6PxFD + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 mongodb '' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.WMi8TEpehQ +++ mktemp ++ local LAST_ERR=/tmp/tmp.tvgU7dPMze ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.WMi8TEpehQ ++ cat /tmp/tmp.tvgU7dPMze ++ rm /tmp/tmp.WMi8TEpehQ 
/tmp/tmp.tvgU7dPMze ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.6he4cW2cKp ++ mktemp + local LAST_ERR=/tmp/tmp.AqhBO47OxV + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.6he4cW2cKp + cat /tmp/tmp.AqhBO47OxV + rm /tmp/tmp.6he4cW2cKp /tmp/tmp.AqhBO47OxV + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + '[' -z '' ']' + desc 'check backup and restore -- aws-s3' + set +o xtrace ----------------------------------------------------------------------------------- check backup and restore -- aws-s3 ----------------------------------------------------------------------------------- Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false Implicit session: session { "id" : UUID("8a565a1b-0aee-43be-a89b-a8a7414fa739") } Percona Server for MongoDB server version: v7.0.11-6 WARNING: shell and server versions do not match switched to db myApp WriteResult({ "nInserted" : 1 }) bye perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240604153100-r6mmt created waiting psmdb-restore/cron-some-name-20240604153100-r6mmt to reach ready state............... 
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.4ibUfiT37k +++ mktemp ++ local LAST_ERR=/tmp/tmp.z9PnqxqK5R ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.4ibUfiT37k ++ cat /tmp/tmp.z9PnqxqK5R ++ rm /tmp/tmp.4ibUfiT37k /tmp/tmp.z9PnqxqK5R ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ local LAST_OUT=/tmp/tmp.LLlJ94lc8y +++ mktemp ++ local LAST_ERR=/tmp/tmp.1vq135oFkE ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.LLlJ94lc8y ++ cat /tmp/tmp.1vq135oFkE ++ rm /tmp/tmp.LLlJ94lc8y /tmp/tmp.1vq135oFkE ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.FbcF0NgXEe ++ mktemp + local LAST_ERR=/tmp/tmp.iaScgbrYtE + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.FbcF0NgXEe + cat /tmp/tmp.iaScgbrYtE + rm /tmp/tmp.FbcF0NgXEe /tmp/tmp.iaScgbrYtE + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; 
s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.uYzZNyupZR +++ mktemp ++ local LAST_ERR=/tmp/tmp.PDQQhN8U6e ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.uYzZNyupZR ++ cat /tmp/tmp.PDQQhN8U6e ++ rm /tmp/tmp.uYzZNyupZR /tmp/tmp.PDQQhN8U6e ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.dyvHetAOjC ++ mktemp + local LAST_ERR=/tmp/tmp.RPLLq5H2OY + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.dyvHetAOjC + cat /tmp/tmp.RPLLq5H2OY + rm /tmp/tmp.dyvHetAOjC /tmp/tmp.RPLLq5H2OY + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.FNhqH4WYPc +++ mktemp ++ local LAST_ERR=/tmp/tmp.F0cBBNmjDg ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.FNhqH4WYPc ++ cat /tmp/tmp.F0cBBNmjDg ++ rm 
/tmp/tmp.FNhqH4WYPc /tmp/tmp.F0cBBNmjDg ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.e8Nq6ZTFvm ++ mktemp + local LAST_ERR=/tmp/tmp.0bSVFQQ77J + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.e8Nq6ZTFvm + cat /tmp/tmp.0bSVFQQ77J + rm /tmp/tmp.e8Nq6ZTFvm /tmp/tmp.0bSVFQQ77J + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + desc 'check backup and restore -- gcp-cs' + set +o xtrace ----------------------------------------------------------------------------------- check backup and restore -- gcp-cs ----------------------------------------------------------------------------------- Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false Implicit session: session { "id" : UUID("45e6c3fb-0354-4010-b50b-36ad77ca24cc") } Percona Server for MongoDB server version: v7.0.11-6 WARNING: shell and server versions do not match switched to db myApp WriteResult({ "nInserted" : 1 }) bye perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240604153100-zkq7s created waiting psmdb-restore/cron-some-name-20240604153100-zkq7s to reach ready state............ 
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.aSkAWzLzrK +++ mktemp ++ local LAST_ERR=/tmp/tmp.2hhGbay9bW ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.aSkAWzLzrK ++ cat /tmp/tmp.2hhGbay9bW ++ rm /tmp/tmp.aSkAWzLzrK /tmp/tmp.2hhGbay9bW ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Vl4sqE4FCR +++ mktemp ++ local LAST_ERR=/tmp/tmp.1MecPGAGdD ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Vl4sqE4FCR ++ cat /tmp/tmp.1MecPGAGdD ++ rm /tmp/tmp.Vl4sqE4FCR /tmp/tmp.1MecPGAGdD ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.rjLTn8fKaT ++ mktemp + local LAST_ERR=/tmp/tmp.fvmJlPXSgp + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.rjLTn8fKaT + cat /tmp/tmp.fvmJlPXSgp + rm /tmp/tmp.rjLTn8fKaT /tmp/tmp.fvmJlPXSgp + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; 
s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.LxXnGB6DnH +++ mktemp ++ local LAST_ERR=/tmp/tmp.ZKyJUOAYIi ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.LxXnGB6DnH ++ cat /tmp/tmp.ZKyJUOAYIi ++ rm /tmp/tmp.LxXnGB6DnH /tmp/tmp.ZKyJUOAYIi ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.r7fz7iXIsl ++ mktemp + local LAST_ERR=/tmp/tmp.G4slrSASq9 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.r7fz7iXIsl + cat /tmp/tmp.G4slrSASq9 + rm /tmp/tmp.r7fz7iXIsl /tmp/tmp.G4slrSASq9 + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ local LAST_OUT=/tmp/tmp.pH8R0QNg6a +++ mktemp ++ local LAST_ERR=/tmp/tmp.gZrmB75vPz ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.pH8R0QNg6a ++ cat /tmp/tmp.gZrmB75vPz ++ rm 
/tmp/tmp.pH8R0QNg6a /tmp/tmp.gZrmB75vPz ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.DDyMApVenB ++ mktemp + local LAST_ERR=/tmp/tmp.47iw7v2pxl + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.DDyMApVenB + cat /tmp/tmp.47iw7v2pxl + rm /tmp/tmp.DDyMApVenB /tmp/tmp.47iw7v2pxl + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + desc 'check backup and restore -- azure-blob' + set +o xtrace ----------------------------------------------------------------------------------- check backup and restore -- azure-blob ----------------------------------------------------------------------------------- Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017,some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false Implicit session: session { "id" : UUID("1b9511cf-2378-44db-86ee-fbc3eff5cdbf") } Percona Server for MongoDB server version: v7.0.11-6 WARNING: shell and server versions do not match switched to db myApp WriteResult({ "nInserted" : 1 }) bye perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240604153100-zrkdm created waiting psmdb-restore/cron-some-name-20240604153100-zrkdm to reach ready state............. 
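Each storage check above (minio, aws-s3, gcp-cs, azure-blob) follows the same pattern: write one more document, create a PerconaServerMongoDBRestore for the scheduled backup, wait for it to reach the ready state, then diff the collection contents from every replica. A minimal way to inspect a single restore by hand, using the object name created in this run (a sketch for illustration, not part of the harness):

# report the restore object's current state (expected to end up "ready")
kubectl get perconaservermongodbrestores.psmdb.percona.com restore-cron-some-name-20240604153100-zrkdm \
    -n scheduled-backup-2974 -o jsonpath='{.status.state}'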
+ '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.UGHM2tHTYd +++ mktemp ++ local LAST_ERR=/tmp/tmp.evtQIuYLhY ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.UGHM2tHTYd ++ cat /tmp/tmp.evtQIuYLhY ++ rm /tmp/tmp.UGHM2tHTYd /tmp/tmp.evtQIuYLhY ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' +++ mktemp ++ local LAST_OUT=/tmp/tmp.dAWOltMZKP +++ mktemp ++ local LAST_ERR=/tmp/tmp.DGmpO576NH ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.dAWOltMZKP ++ cat /tmp/tmp.DGmpO576NH ++ rm /tmp/tmp.dAWOltMZKP /tmp/tmp.DGmpO576NH ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.7u4lIpWIiz ++ mktemp + local LAST_ERR=/tmp/tmp.ithE35JDpB + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.7u4lIpWIiz + cat /tmp/tmp.ithE35JDpB + rm /tmp/tmp.7u4lIpWIiz /tmp/tmp.ithE35JDpB + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' 
myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.sJjdIROTHB +++ mktemp ++ local LAST_ERR=/tmp/tmp.d5XBcXcxjD ++ local exit_status=0 ++ local timeout=4 + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.sJjdIROTHB ++ cat /tmp/tmp.d5XBcXcxjD ++ rm /tmp/tmp.sJjdIROTHB /tmp/tmp.d5XBcXcxjD ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.Dwm5fMk3TT ++ mktemp + local LAST_ERR=/tmp/tmp.lVi2LYYBO8 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.Dwm5fMk3TT + cat /tmp/tmp.lVi2LYYBO8 + rm /tmp/tmp.Dwm5fMk3TT /tmp/tmp.lVi2LYYBO8 + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.rZMx49VgWi +++ mktemp ++ local LAST_ERR=/tmp/tmp.xwwoKioXrl ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.rZMx49VgWi ++ cat /tmp/tmp.xwwoKioXrl ++ rm 
/tmp/tmp.rZMx49VgWi /tmp/tmp.xwwoKioXrl ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.RIaTUOxEjV ++ mktemp + local LAST_ERR=/tmp/tmp.nXQBCTWw7j + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.RIaTUOxEjV + cat /tmp/tmp.nXQBCTWw7j + rm /tmp/tmp.RIaTUOxEjV /tmp/tmp.nXQBCTWw7j + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + desc 'add physical backup schedule, wait for the first backup' + set +o xtrace ----------------------------------------------------------------------------------- add physical backup schedule, wait for the first backup ----------------------------------------------------------------------------------- perconaservermongodb.psmdb.percona.com/some-name configured perconaservermongodb.psmdb.percona.com/some-name configured cron-some-name-20240604153800-lh7g7..... ----------------------------------------------------------------------------------- check backup and restore -- minio ----------------------------------------------------------------------------------- perconaservermongodbrestore.psmdb.percona.com/restore-cron-some-name-20240604153800-lh7g7 created waiting psmdb-restore/cron-some-name-20240604153800-lh7g7 to reach ready state................................................................................................................ + '[' 1 -eq 1 ']' + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.FDMePpGv1s +++ mktemp ++ local LAST_ERR=/tmp/tmp.uILAY4Kx5w ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.FDMePpGv1s ++ cat /tmp/tmp.uILAY4Kx5w ++ rm /tmp/tmp.FDMePpGv1s /tmp/tmp.uILAY4Kx5w ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 1 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.LOZRxbWjbo +++ mktemp ++ local LAST_ERR=/tmp/tmp.Co050MGQw7 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.LOZRxbWjbo ++ cat /tmp/tmp.Co050MGQw7 ++ rm /tmp/tmp.LOZRxbWjbo /tmp/tmp.Co050MGQw7 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 2 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.m0q2jCDYdB +++ mktemp ++ local LAST_ERR=/tmp/tmp.GM44uQbfd2 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.m0q2jCDYdB ++ cat /tmp/tmp.GM44uQbfd2 ++ rm /tmp/tmp.m0q2jCDYdB /tmp/tmp.GM44uQbfd2 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 3 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.krtdIBDje2 +++ mktemp ++ local LAST_ERR=/tmp/tmp.NyEgyaNNeM ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.krtdIBDje2 ++ cat /tmp/tmp.NyEgyaNNeM ++ rm /tmp/tmp.krtdIBDje2 /tmp/tmp.NyEgyaNNeM ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 4 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Gz7CfpF4Ez +++ mktemp ++ local LAST_ERR=/tmp/tmp.xHZKlZXNYC ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Gz7CfpF4Ez ++ cat /tmp/tmp.xHZKlZXNYC ++ rm /tmp/tmp.Gz7CfpF4Ez /tmp/tmp.xHZKlZXNYC ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 5 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.wWvOBmmbuD +++ mktemp ++ local LAST_ERR=/tmp/tmp.dWFzoeLvFJ ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.wWvOBmmbuD ++ cat /tmp/tmp.dWFzoeLvFJ ++ rm /tmp/tmp.wWvOBmmbuD /tmp/tmp.dWFzoeLvFJ ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 6 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.NXYynpCS1q +++ mktemp ++ local LAST_ERR=/tmp/tmp.V03vFrihuC ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.NXYynpCS1q ++ cat /tmp/tmp.V03vFrihuC ++ rm /tmp/tmp.NXYynpCS1q /tmp/tmp.V03vFrihuC ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 7 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Nm77SANooC +++ mktemp ++ local LAST_ERR=/tmp/tmp.EpAavDnaNc ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Nm77SANooC ++ cat /tmp/tmp.EpAavDnaNc ++ rm /tmp/tmp.Nm77SANooC /tmp/tmp.EpAavDnaNc ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 8 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.op0JIiTMz1 +++ mktemp ++ local LAST_ERR=/tmp/tmp.Wh1wIDTAqj ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.op0JIiTMz1 ++ cat /tmp/tmp.Wh1wIDTAqj ++ rm /tmp/tmp.op0JIiTMz1 /tmp/tmp.Wh1wIDTAqj ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 9 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.KJuKWLnvKn +++ mktemp ++ local LAST_ERR=/tmp/tmp.WBkFBnjNYJ ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.KJuKWLnvKn ++ cat /tmp/tmp.WBkFBnjNYJ ++ rm /tmp/tmp.KJuKWLnvKn /tmp/tmp.WBkFBnjNYJ ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 10 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.ecZEJJMx6k +++ mktemp ++ local LAST_ERR=/tmp/tmp.sXcjRrsXDx ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.ecZEJJMx6k ++ cat /tmp/tmp.sXcjRrsXDx ++ rm /tmp/tmp.ecZEJJMx6k /tmp/tmp.sXcjRrsXDx ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 11 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.dTKif2CFQ9 +++ mktemp ++ local LAST_ERR=/tmp/tmp.E398WygbyT ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.dTKif2CFQ9 ++ cat /tmp/tmp.E398WygbyT ++ rm /tmp/tmp.dTKif2CFQ9 /tmp/tmp.E398WygbyT ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 12 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.hrgATOO9qe +++ mktemp ++ local LAST_ERR=/tmp/tmp.5scNIaulPg ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.hrgATOO9qe ++ cat /tmp/tmp.5scNIaulPg ++ rm /tmp/tmp.hrgATOO9qe /tmp/tmp.5scNIaulPg ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 13 -ge 32 ']' + echo -n . 
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.lNhgzgEFu8 +++ mktemp ++ local LAST_ERR=/tmp/tmp.xML7rUUWZS ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.lNhgzgEFu8 ++ cat /tmp/tmp.xML7rUUWZS ++ rm /tmp/tmp.lNhgzgEFu8 /tmp/tmp.xML7rUUWZS ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongo_cmd find myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.xW1IE6PMcJ +++ mktemp ++ local LAST_ERR=/tmp/tmp.vWvgYjtxlr ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.xW1IE6PMcJ ++ cat /tmp/tmp.vWvgYjtxlr ++ rm /tmp/tmp.xW1IE6PMcJ /tmp/tmp.vWvgYjtxlr ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.npdiUX2PPi ++ mktemp + local LAST_ERR=/tmp/tmp.jRZP3gq1Hz + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.npdiUX2PPi + cat /tmp/tmp.jRZP3gq1Hz + rm /tmp/tmp.npdiUX2PPi /tmp/tmp.jRZP3gq1Hz + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 + 
local driver=mongodb + local suffix=.svc.cluster.local + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.9qSFYPYw1L +++ mktemp ++ local LAST_ERR=/tmp/tmp.IPhWU1zd4b ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.9qSFYPYw1L ++ cat /tmp/tmp.IPhWU1zd4b ++ rm /tmp/tmp.9qSFYPYw1L /tmp/tmp.IPhWU1zd4b ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974 == *cfg* ]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.YSkEzPgoOz ++ mktemp + local LAST_ERR=/tmp/tmp.y7cHCCTP9n + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-1.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.YSkEzPgoOz + cat /tmp/tmp.y7cHCCTP9n + rm /tmp/tmp.YSkEzPgoOz /tmp/tmp.y7cHCCTP9n + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + compare_mongo_cmd find myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local command=find + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local postfix= + local suffix= + local database=myApp + local collection=test + run_mongo 'use myApp\n db.test.find()' myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.xWzxPOdayZ +++ mktemp ++ local LAST_ERR=/tmp/tmp.G7kQQHtdSk ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.xWzxPOdayZ ++ cat /tmp/tmp.G7kQQHtdSk ++ rm /tmp/tmp.xWzxPOdayZ /tmp/tmp.G7kQQHtdSk ++ return 0 + local client_container=psmdb-client-7469665986-bt52v + local mongo_flag= + [[ myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974 == *cfg* 
]] + replica_set=rs0 + kubectl_bin exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' ++ mktemp + local LAST_OUT=/tmp/tmp.kDRX3Mw5CY ++ mktemp + local LAST_ERR=/tmp/tmp.v8m81306Og + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-7469665986-bt52v -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-rs0-2.some-name-rs0.scheduled-backup-2974.svc.cluster.local/admin?ssl=false\&replicaSet=rs0 ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.kDRX3Mw5CY + cat /tmp/tmp.v8m81306Og + rm /tmp/tmp.kDRX3Mw5CY /tmp/tmp.v8m81306Og + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1559/e2e-tests/scheduled-backup/compare/find.json /tmp/tmp.4JRaSLmRQe/find + sleep 60 + unlabel_node + desc 'remove labels' + set +o xtrace ----------------------------------------------------------------------------------- remove labels ----------------------------------------------------------------------------------- node/gke-jen-psmdb-1559-6384b-default-pool-2d95508a-4259 labeled ----------------------------------------------------------------------------------- destroy cluster/operator and all other resources ----------------------------------------------------------------------------------- ----------------------------------------------------------------------------------- get and delete old CRDs and RBAC ----------------------------------------------------------------------------------- customresourcedefinition.apiextensions.k8s.io "perconaservermongodbbackups.psmdb.percona.com" deleted customresourcedefinition.apiextensions.k8s.io "perconaservermongodbrestores.psmdb.percona.com" deleted customresourcedefinition.apiextensions.k8s.io "perconaservermongodbs.psmdb.percona.com" deleted + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-2974 cron-some-name-20240604153100-qwsh2 --type=merge -p '{"metadata":{"finalizers":[]}}' perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240604153100-qwsh2 patched + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-2974 cron-some-name-20240604153100-r6mmt --type=merge -p '{"metadata":{"finalizers":[]}}' perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240604153100-r6mmt patched + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-2974 cron-some-name-20240604153100-zrkdm --type=merge -p '{"metadata":{"finalizers":[]}}' E0604 15:46:18.731886 23537 memcache.go:287] couldn't get resource list for psmdb.percona.com/v1-10-0: the server could not find the requested resource E0604 15:46:18.732085 23537 memcache.go:287] couldn't get resource list for psmdb.percona.com/v1-11-0: the server could not find the requested resource E0604 15:46:18.734514 23537 memcache.go:287] couldn't get resource list for psmdb.percona.com/v1-12-0: the server could not find the requested resource perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240604153100-zrkdm patched + kubectl patch perconaservermongodbbackups.psmdb.percona.com -n scheduled-backup-2974 cron-some-name-20240604153800-lh7g7 --type=merge -p '{"metadata":{"finalizers":[]}}' perconaservermongodbbackup.psmdb.percona.com/cron-some-name-20240604153800-lh7g7 patched 
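Before the teardown above begins, the data-verification steps traced just before it (compare_mongo_cmd / run_mongo) all follow one pattern: exec into the psmdb-client pod, pipe a small mongo script at the target replica-set member, strip noisy log lines and ObjectIds, and diff the result against the stored compare/find.json. A condensed sketch of that pattern, with the pod selector, filters, and connection string taken from the log; the helper body itself is a reconstruction and may differ from the suite's code:

# Sketch of the verification pattern traced above (not the suite's exact helpers).
compare_find() {
    local uri=$1        # e.g. myApp:myPass@some-name-rs0-0.some-name-rs0.scheduled-backup-2974
    local expected=$2   # e.g. .../e2e-tests/scheduled-backup/compare/find.json
    local client
    client=$(kubectl get pods --selector=name=psmdb-client \
        -o 'jsonpath={.items[].metadata.name}')

    # run the query inside the client pod, filter mongo shell noise, mask ObjectIds and pod ordinals
    kubectl exec "$client" -- bash -c \
        "printf 'use myApp\n db.test.find()\n' | mongo mongodb://${uri}.svc.cluster.local/admin?ssl=false\&replicaSet=rs0" \
        | egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match' \
        | sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' \
        > /tmp/find.out

    diff "$expected" /tmp/find.out
}

The same check is run once per replica-set member (rs0-0, rs0-1, rs0-2 in the trace), so a restored backup is verified on every node, not just the primary.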
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com condition met error: the server doesn't have a resource type "perconaservermongodbrestores" + kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}' error: the server doesn't have a resource type "perconaservermongodbrestores" error: the server doesn't have a resource type "perconaservermongodbs" + kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}' error: the server doesn't have a resource type "perconaservermongodbs" clusterrole.rbac.authorization.k8s.io "percona-server-mongodb-operator" deleted clusterrolebinding.rbac.authorization.k8s.io "service-account-percona-server-mongodb-operator" deleted ----------------------------------------------------------------------------------- test passed -----------------------------------------------------------------------------------
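As the teardown shows, leftover PerconaServerMongoDBBackup objects are patched to drop their finalizers so that removing the CRDs is not blocked by objects still awaiting finalization. A minimal sketch of that cleanup step, using the namespace, resource type, and patch payload from the log; iterating over the backups in a loop is an assumption, the trace patches each object explicitly:

# Sketch of the finalizer-clearing step in the teardown above (loop is illustrative).
namespace=scheduled-backup-2974
for backup in $(kubectl get perconaservermongodbbackups.psmdb.percona.com \
        -n "$namespace" -o 'jsonpath={.items[*].metadata.name}'); do
    kubectl patch perconaservermongodbbackups.psmdb.percona.com \
        -n "$namespace" "$backup" \
        --type=merge -p '{"metadata":{"finalizers":[]}}'
done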