Log: /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/logs/demand-backup-physical-sharded.log
E0507 10:58:33.371584 2956 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:33.488407 2956 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:33.600630 2956 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:33.713049 2956 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
E0507 10:58:38.098723 3544 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:38.328847 3544 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:40.859127 3906 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:41.092991 3906 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:41.205251 3906 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:41.317721 3906 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:41.659645 3906 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:41.898807 3906 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:42.014679 3906 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:42.129297 3906 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:42.241731 3906 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbbackups"
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
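Note: the patch above is the standard trick for unsticking custom resources whose operator is already gone: clearing metadata.finalizers lets deletion complete instead of hanging in Terminating. A minimal sketch of the pattern (the loop is illustrative, not the suite's exact helper):

# Clear finalizers on every leftover PSMDB custom resource so deletion
# cannot hang, then remove the CRD itself (illustrative sketch).
for crd in perconaservermongodbbackups.psmdb.percona.com \
           perconaservermongodbrestores.psmdb.percona.com \
           perconaservermongodbs.psmdb.percona.com; do
    kubectl get "$crd" --all-namespaces --no-headers \
        -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name 2>/dev/null |
    while read -r ns name; do
        kubectl patch "$crd" "$name" -n "$ns" --type=merge -p '{"metadata":{"finalizers":[]}}'
    done
    kubectl delete crd "$crd" --ignore-not-found --wait=false
done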
E0507 10:58:44.437790 4427 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:44.602278 4427 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:44.755818 4427 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:44.872534 4427 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:45.219648 4427 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:45.453204 4427 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:45.575193 4427 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:45.686342 4427 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:45.797819 4427 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbbackups"
E0507 10:58:47.709539 4934 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:48.045293 4934 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:48.158993 4934 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:48.272201 4934 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:49.758933 5340 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:50.088365 5340 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:50.200677 5340 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:50.313326 5340 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:50.664234 5340 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:50.883599 5340 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:51.000367 5340 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:51.116274 5340 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0507 10:58:51.228789 5340 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbs"
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
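Note: the "null" in the NotFound errors above is what happens when a CRD name read from an empty lookup gets interpolated into kubectl delete. A more defensive variant (hypothetical, not the suite's code) only deletes names that actually resolve:

# Enumerate existing PSMDB CRDs instead of interpolating a possibly-empty
# variable, so kubectl never receives the literal name "null".
kubectl get crd -o name | grep psmdb.percona.com | while read -r crd; do
    kubectl delete "$crd" --ignore-not-found
done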
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces psmdb-operator
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace psmdb-operator
-----------------------------------------------------------------------------------
namespace/psmdb-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1543-c8a64a78-9-cluster8" modified.
-----------------------------------------------------------------------------------
start PSMDB operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
waiting for pod/percona-server-mongodb-operator-fdfdd955d-chvd5 to be ready..OK
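Note: the "waiting for pod/... to be ready..OK" lines come from a polling helper; kubectl can do the same with a built-in wait. A sketch of both styles (the helper body is inferred from the log output, not copied from the suite):

# Built-in: block until the pod reports Ready, with a hard timeout.
kubectl wait --for=condition=Ready --timeout=300s \
    pod/percona-server-mongodb-operator-fdfdd955d-chvd5

# Hand-rolled equivalent of the suite's dotted progress output.
wait_pod() {
    local pod=$1 retries=0
    until [ "$(kubectl get "$pod" -o 'jsonpath={.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
        retries=$((retries + 1))
        if [ "$retries" -ge 60 ]; then
            echo " timeout waiting for $pod"
            return 1
        fi
        echo -n .
        sleep 5
    done
    echo OK
}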
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces demand-backup-physical-sharded-23638
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace demand-backup-physical-sharded-23638
-----------------------------------------------------------------------------------
namespace/demand-backup-physical-sharded-23638 created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-1543-c8a64a78-9-cluster8" modified.
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
"minio" has been removed from your repositories
"minio" has been added to your repositories
NAME: minio-service
LAST DEPLOYED: Tue May 7 11:00:16 2024
NAMESPACE: demand-backup-physical-sharded-23638
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.demand-backup-physical-sharded-23638.svc.cluster.local

To access MinIO from localhost, run the below commands:

1. export POD_NAME=$(kubectl get pods --namespace demand-backup-physical-sharded-23638 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace demand-backup-physical-sharded-23638

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:

1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace demand-backup-physical-sharded-23638 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace demand-backup-physical-sharded-23638 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local
waiting for pod/minio-service-57dd49b-pkk4h to be ready.OK
service/minio-service created
make_bucket: operator-testing
pod "aws-cli" deleted
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: Internal error occurred: error attaching to container: container is in CONTAINER_EXITED state
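Note: the make_bucket line and the CONTAINER_EXITED warning above come from a short-lived aws-cli pod that creates the test bucket against the in-cluster MinIO. Roughly as follows (access keys and image are placeholders; the suite's actual credentials come from the minio-service secret):

# One-shot pod: point the AWS CLI at MinIO's S3 endpoint and create the
# bucket the backups will be written to.
kubectl run aws-cli --rm -i --restart=Never \
    --env=AWS_ACCESS_KEY_ID=REPLACE_ME \
    --env=AWS_SECRET_ACCESS_KEY=REPLACE_ME \
    --image=amazon/aws-cli -- \
    --endpoint-url http://minio-service:9000 s3 mb s3://operator-testing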
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
Testing on sharded cluster
-----------------------------------------------------------------------------------
Creating PSMDB cluster
secret/some-users created
perconaservermongodb.psmdb.percona.com/some-name created
deployment.apps/psmdb-client created
check if all pods started
waiting for pod/some-name-rs0-0 to be ready.................OK
waiting for pod/some-name-rs0-1 to be ready...............OK
waiting for pod/some-name-rs0-2 to be ready.......OK
Waiting for cluster readyness.............................
waiting for pod/some-name-cfg-0 to be ready.OK
waiting for pod/some-name-cfg-1 to be ready.OK
waiting for pod/some-name-cfg-2 to be ready.OK
Waiting for cluster readyness
waiting for pod/some-name-mongos-0 to be ready.OK
waiting for pod/some-name-mongos-1 to be ready.OK
waiting for pod/some-name-mongos-2 to be ready.OK
Waiting for cluster readyness
waiting for cluster readyness
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("aedbe640-2308-43ac-844a-98815f05f95d") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
Successfully added user: { "user" : "myApp", "roles" : [ { "db" : "myApp", "role" : "readWrite" } ] }
bye
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("adcf69ca-ba0d-41cb-bf98-3518f9d75870") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
running backups
perconaservermongodbbackup.psmdb.percona.com/backup-minio-sharded created
perconaservermongodbbackup.psmdb.percona.com/backup-aws-s3-sharded created
perconaservermongodbbackup.psmdb.percona.com/backup-gcp-cs-sharded created
perconaservermongodbbackup.psmdb.percona.com/backup-azure-blob-sharded created
backup-aws-s3-sharded....................................
backup-gcp-cs-sharded.........................
backup-azure-blob-sharded........................
backup-minio-sharded.
drop collection
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f14a954e-d215-40af-8229-f20bba0c50b3") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
true
bye
check backup and restore -- aws-s3
perconaservermongodbrestore.psmdb.percona.com/restore-backup-aws-s3-sharded created
waiting psmdb-restore/backup-aws-s3-sharded to reach requested state.........................................................................................................................................................................................................................
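Note: the restore objects in this test are generated from a template by run_restore (its sed calls are visible later in this log). Applied by hand, the manifest would look approximately like this; only the two substituted fields are confirmed by the trace, and spec.clusterName is an assumption based on the operator's CRD:

kubectl apply -f - <<EOF
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore-backup-aws-s3-sharded
spec:
  clusterName: some-name
  backupName: backup-aws-s3-sharded
EOF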
+ '[' 0 -eq 1 ']'
+ echo

+ compare_kubectl statefulset/some-name-rs0 _restore_sharded
+ local resource=statefulset/some-name-rs0
+ local postfix=_restore_sharded
+ local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml
+ local new_result=/tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml
+ '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded-oc.yml ']'
+ kubectl_bin get -o yaml statefulset/some-name-rs0
+ yq eval '
    del(.metadata.ownerReferences[].apiVersion) |
    del(.metadata.managedFields) |
    del(.. | select(has("creationTimestamp")).creationTimestamp) |
    del(.. | select(has("namespace")).namespace) |
    del(.. | select(has("uid")).uid) |
    del(.metadata.resourceVersion) |
    del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) |
    del(.metadata.selfLink) |
    del(.metadata.annotations."cloud.google.com/neg") |
    del(.. | select(has("image")).image) |
    del(.. | select(has("clusterIP")).clusterIP) |
    del(.. | select(has("clusterIPs")).clusterIPs) |
    del(.. | select(has("dataSource")).dataSource) |
    del(.. | select(has("procMount")).procMount) |
    del(.. | select(has("storageClassName")).storageClassName) |
    del(.. | select(has("finalizers")).finalizers) |
    del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") |
    del(.. | select(has("volumeName")).volumeName) |
    del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") |
    del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") |
    del(.spec.volumeMode) |
    del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") |
    del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") |
    del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") |
    del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") |
    del(.. | select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") |
    del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) |
    del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) |
    del(.. | select(has("nodePort")).nodePort) |
    del(.status) |
    (.. | select(tag == "!!str")) |= sub("demand-backup-physical-sharded-23638", "NAME_SPACE") |
    del(.spec.volumeClaimTemplates[].apiVersion) |
    del(.spec.volumeClaimTemplates[].kind) |
    del(.spec.ipFamilies) |
    del(.spec.ipFamilyPolicy) |
    (.. | select(. == "extensions/v1beta1")) = "apps/v1" |
    (.. | select(. == "batch/v1beta1")) = "batch/v1"
  ' -
++ mktemp
+ local LAST_OUT=/tmp/tmp.Z8qPUDWqaZ
++ mktemp
+ local LAST_ERR=/tmp/tmp.oOPkVtqWAN
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl get -o yaml statefulset/some-name-rs0
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.Z8qPUDWqaZ
+ cat /tmp/tmp.oOPkVtqWAN
+ rm /tmp/tmp.Z8qPUDWqaZ /tmp/tmp.oOPkVtqWAN
+ return 0
+ yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml
+ version_gt 1.22
++ echo '1.26 >= 1.22'
++ bc -l
+ '[' 1 -eq 1 ']'
+ return 0
+ yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml
+ yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml
+ [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml == */cronjob* ]]
+ diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml
+ wait_restore backup-aws-s3-sharded some-name ready 0 1800
+ local backup_name=backup-aws-s3-sharded
+ local cluster_name=some-name
+ local target_state=ready
+ local wait_cluster_consistency=0
+ local wait_time=1800
+ set +o xtrace
waiting psmdb-restore/backup-aws-s3-sharded to reach ready state...........................................................................
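Note: the mktemp/LAST_OUT/LAST_ERR/seq 0 2 boilerplate that repeats throughout this trace is the suite's kubectl_bin wrapper. Reconstructed from the trace, it behaves roughly like this (a sketch, not the verbatim function):

# Retry kubectl up to three times, buffering stdout/stderr in temp files
# and replaying them to the caller once the command succeeds or retries
# are exhausted.
kubectl_bin() {
    local LAST_OUT LAST_ERR exit_status=0 timeout=4
    LAST_OUT=$(mktemp)
    LAST_ERR=$(mktemp)
    for i in $(seq 0 2); do
        set +e
        kubectl "$@" >"$LAST_OUT" 2>"$LAST_ERR"
        exit_status=$?
        set -e
        if [ "$exit_status" -eq 0 ]; then
            break
        fi
        sleep "$timeout"
    done
    cat "$LAST_OUT"
    cat "$LAST_ERR" >&2
    rm "$LAST_OUT" "$LAST_ERR"
    return "$exit_status"
}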
+ '[' 0 -eq 1 ']'
+ kubectl_bin get psmdb some-name -o yaml
++ mktemp
+ local LAST_OUT=/tmp/tmp.Dxm0aAGSvZ
++ mktemp
+ local LAST_ERR=/tmp/tmp.Pqd7X8lr3U
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl get psmdb some-name -o yaml
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.Dxm0aAGSvZ
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"name":"some-name","namespace":"demand-backup-physical-sharded-23638"},"spec":{"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","storages":{"aws-s3":{"s3":{"bucket":"operator-testing","credentialsSecret":"aws-s3-secret","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"azure-blob":{"azure":{"container":"operator-testing","credentialsSecret":"azure-secret","prefix":"psmdb-demand-backup-physical-sharded"},"type":"azure"},"gcp-cs":{"s3":{"bucket":"operator-testing","credentialsSecret":"gcp-cs-secret","endpointUrl":"https://storage.googleapis.com","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"minio":{"s3":{"bucket":"operator-testing","credentialsSecret":"minio-secret","endpointUrl":"http://minio-service:9000/","insecureSkipTLSVerify":false,"region":"us-east-1"},"type":"s3"}},"tasks":[{"compressionType":"gzip","enabled":true,"name":"weekly","schedule":"0 0 * * 0","storageName":"aws-s3"}]},"image":"perconalab/percona-server-mongodb-operator:main-mongod7.0","imagePullPolicy":"Always","replsets":[{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: false\nsetParameter:\n ttlMonitorSleepSecs: 60\n wiredTigerConcurrentReadTransactions: 128\n wiredTigerConcurrentWriteTransactions: 128\nstorage:\n engine: wiredTiger\n wiredTiger:\n collectionConfig:\n blockCompressor: snappy\n engineConfig:\n directoryForIndexes: false\n journalCompressor: snappy\n indexConfig:\n prefixCompression: true\n","expose":{"enabled":false,"exposeType":"ClusterIP"},"name":"rs0","resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}}],"secrets":{"users":"some-users"},"sharding":{"configsvrReplSet":{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: false\nsetParameter:\n ttlMonitorSleepSecs: 60\n wiredTigerConcurrentReadTransactions: 128\n wiredTigerConcurrentWriteTransactions: 128\nstorage:\n engine: wiredTiger\n wiredTiger:\n collectionConfig:\n blockCompressor: snappy\n engineConfig:\n directoryForIndexes: false\n journalCompressor: snappy\n indexConfig:\n prefixCompression: true\n","expose":{"enabled":false,"exposeType":"ClusterIP"},"resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}},"enabled":true,"mongos":{"affinity":{"antiAffinityTopologyKey":"none"},"expose":{"exposeType":"LoadBalancer"},"resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3}},"upgradeOptions":{"apply":"Never"}}}
    percona.com/resync-pbm: "true"
  creationTimestamp: "2024-05-07T11:01:06Z"
  generation: 2
  name: some-name
  namespace: demand-backup-physical-sharded-23638
  resourceVersion: "12106"
  uid: bb928bff-3e1d-44d8-b03a-cc91e5e421e9
spec:
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    storages:
      aws-s3:
        s3:
          bucket: operator-testing
          credentialsSecret: aws-s3-secret
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical-sharded
          region: us-east-1
        type: s3
      azure-blob:
        azure:
          container: operator-testing
          credentialsSecret: azure-secret
          prefix: psmdb-demand-backup-physical-sharded
        type: azure
      gcp-cs:
        s3:
          bucket: operator-testing
          credentialsSecret: gcp-cs-secret
          endpointUrl: https://storage.googleapis.com
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical-sharded
          region: us-east-1
        type: s3
      minio:
        s3:
          bucket: operator-testing
          credentialsSecret: minio-secret
          endpointUrl: http://minio-service:9000/
          insecureSkipTLSVerify: false
          region: us-east-1
        type: s3
    tasks:
    - compressionType: gzip
      enabled: true
      name: weekly
      schedule: 0 0 * * 0
      storageName: aws-s3
  crVersion: 1.16.0
  image: perconalab/percona-server-mongodb-operator:main-mongod7.0
  imagePullPolicy: Always
  replsets:
  - affinity:
      antiAffinityTopologyKey: none
    configuration: |
      operationProfiling:
        mode: slowOp
        slowOpThresholdMs: 100
      security:
        enableEncryption: true
        redactClientLogData: false
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      storage:
        engine: wiredTiger
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
    expose:
      enabled: false
      exposeType: ClusterIP
    name: rs0
    resources:
      limits:
        cpu: 500m
        memory: 1G
      requests:
        cpu: 100m
        memory: 0.1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: some-users
  sharding:
    configsvrReplSet:
      affinity:
        antiAffinityTopologyKey: none
      configuration: |
        operationProfiling:
          mode: slowOp
          slowOpThresholdMs: 100
        security:
          enableEncryption: true
          redactClientLogData: false
        setParameter:
          ttlMonitorSleepSecs: 60
          wiredTigerConcurrentReadTransactions: 128
          wiredTigerConcurrentWriteTransactions: 128
        storage:
          engine: wiredTiger
          wiredTiger:
            collectionConfig:
              blockCompressor: snappy
            engineConfig:
              directoryForIndexes: false
              journalCompressor: snappy
            indexConfig:
              prefixCompression: true
      expose:
        enabled: false
        exposeType: ClusterIP
      resources:
        limits:
          cpu: 500m
          memory: 1G
        requests:
          cpu: 100m
          memory: 0.1G
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    enabled: true
    mongos:
      affinity:
        antiAffinityTopologyKey: none
      expose:
        exposeType: LoadBalancer
      resources:
        limits:
          cpu: 500m
          memory: 1G
        requests:
          cpu: 100m
          memory: 0.1G
      size: 3
  upgradeOptions:
    apply: Never
status:
  conditions:
  - lastTransitionTime: "2024-05-07T11:01:09Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:03:10Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:03:10Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:03:51Z"
    reason: MongosReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:03:51Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:04:02Z"
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:08:08Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:08:42Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:08:42Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:08:56Z"
    message: 'cfg: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:08:56Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:09:22Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:09:22Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:09:36Z"
    message: 'cfg: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:09:36Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:09:56Z"
    message: 'rs0: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:09:56Z"
    status: "True"
    type: initializing
  - lastTransitionTime: "2024-05-07T11:10:09Z"
    message: 'cfg: ready'
    reason: RSReady
    status: "True"
    type: ready
  - lastTransitionTime: "2024-05-07T11:10:09Z"
    status: "True"
    type: initializing
  host: 34.170.230.148
  mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0
  mongoVersion: 7.0.8-5
  mongos:
    ready: 0
    size: 0
    status: initializing
  observedGeneration: 2
  ready: 0
  replsets:
    cfg:
      initialized: true
      ready: 0
      size: 3
      status: initializing
    rs0:
      added_as_shard: true
      initialized: true
      ready: 0
      size: 3
      status: initializing
  size: 6
  state: initializing
+ cat /tmp/tmp.Pqd7X8lr3U
+ rm /tmp/tmp.Dxm0aAGSvZ /tmp/tmp.Pqd7X8lr3U
+ return 0
++ yq '.metadata.annotations."percona.com/resync-pbm"'
++ kubectl_bin get psmdb some-name -o yaml
+++ mktemp
++ local LAST_OUT=/tmp/tmp.EEePuu4tyh
+++ mktemp
++ local LAST_ERR=/tmp/tmp.jDFr3akvqF
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o yaml
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.EEePuu4tyh
++ cat /tmp/tmp.jDFr3akvqF
++ rm /tmp/tmp.EEePuu4tyh /tmp/tmp.jDFr3akvqF
++ return 0
+ '[' true == null ']'
+ echo

+ wait_cluster_consistency some-name
+ local cluster_name=some-name
+ local wait_time=32
+ retry=0
+ sleep 7
+ echo -n 'waiting for cluster readyness'
waiting for cluster readyness
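Note: the polling that follows is wait_cluster_consistency; reconstructed from the trace it is approximately this (the helper's exact body is not shown in the log):

# Poll .status.state of the PSMDB resource until the operator reports
# "ready", printing one dot per attempt and giving up after 32 tries.
wait_cluster_consistency() {
    local cluster_name=$1
    local retry=0
    sleep 7
    echo -n 'waiting for cluster readyness'
    until [[ "$(kubectl get psmdb "$cluster_name" -o 'jsonpath={.status.state}')" == "ready" ]]; do
        retry=$((retry + 1))
        if [ "$retry" -ge 32 ]; then
            echo " cluster $cluster_name never reached ready state"
            return 1
        fi
        echo -n .
        sleep 10
    done
}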
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.LCoA4XhFL0 +++ mktemp ++ local LAST_ERR=/tmp/tmp.JnbfEhcYQe ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.LCoA4XhFL0 ++ cat /tmp/tmp.JnbfEhcYQe ++ rm /tmp/tmp.LCoA4XhFL0 /tmp/tmp.JnbfEhcYQe ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 1 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.OKqO5s7ylG +++ mktemp ++ local LAST_ERR=/tmp/tmp.dA7MYn0PnV ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.OKqO5s7ylG ++ cat /tmp/tmp.dA7MYn0PnV ++ rm /tmp/tmp.OKqO5s7ylG /tmp/tmp.dA7MYn0PnV ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 2 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.M4SqLCbBzX +++ mktemp ++ local LAST_ERR=/tmp/tmp.5edh0mmUIt ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.M4SqLCbBzX ++ cat /tmp/tmp.5edh0mmUIt ++ rm /tmp/tmp.M4SqLCbBzX /tmp/tmp.5edh0mmUIt ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 3 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Cnk1EDSstq +++ mktemp ++ local LAST_ERR=/tmp/tmp.QfWZUEwHne ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Cnk1EDSstq ++ cat /tmp/tmp.QfWZUEwHne ++ rm /tmp/tmp.Cnk1EDSstq /tmp/tmp.QfWZUEwHne ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 4 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.4dETryCs3g +++ mktemp ++ local LAST_ERR=/tmp/tmp.UHGOYZMKBN ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.4dETryCs3g ++ cat /tmp/tmp.UHGOYZMKBN ++ rm /tmp/tmp.4dETryCs3g /tmp/tmp.UHGOYZMKBN ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 5 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.8Su3DThaMi +++ mktemp ++ local LAST_ERR=/tmp/tmp.pA3n3xuoM7 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.8Su3DThaMi ++ cat /tmp/tmp.pA3n3xuoM7 ++ rm /tmp/tmp.8Su3DThaMi /tmp/tmp.pA3n3xuoM7 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 6 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.i628GrXMrj +++ mktemp ++ local LAST_ERR=/tmp/tmp.i4rRhih0dz ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.i628GrXMrj ++ cat /tmp/tmp.i4rRhih0dz ++ rm /tmp/tmp.i628GrXMrj /tmp/tmp.i4rRhih0dz ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 7 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.SHEpe3n6vG +++ mktemp ++ local LAST_ERR=/tmp/tmp.xJ2gbglhky ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.SHEpe3n6vG ++ cat /tmp/tmp.xJ2gbglhky ++ rm /tmp/tmp.SHEpe3n6vG /tmp/tmp.xJ2gbglhky ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 8 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.hVnf5NMXj5 +++ mktemp ++ local LAST_ERR=/tmp/tmp.uB82bzduQm ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.hVnf5NMXj5 ++ cat /tmp/tmp.uB82bzduQm ++ rm /tmp/tmp.hVnf5NMXj5 /tmp/tmp.uB82bzduQm ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 9 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.2k91lJ3JSx +++ mktemp ++ local LAST_ERR=/tmp/tmp.giZrzRPjg4 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.2k91lJ3JSx ++ cat /tmp/tmp.giZrzRPjg4 ++ rm /tmp/tmp.2k91lJ3JSx /tmp/tmp.giZrzRPjg4 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 10 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.W87s2dyZbX +++ mktemp ++ local LAST_ERR=/tmp/tmp.lxc6nZcrYl ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.W87s2dyZbX ++ cat /tmp/tmp.lxc6nZcrYl ++ rm /tmp/tmp.W87s2dyZbX /tmp/tmp.lxc6nZcrYl ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 11 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.zLZtG9aCZu +++ mktemp ++ local LAST_ERR=/tmp/tmp.RFYNeK55nN ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.zLZtG9aCZu ++ cat /tmp/tmp.RFYNeK55nN ++ rm /tmp/tmp.zLZtG9aCZu /tmp/tmp.RFYNeK55nN ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 12 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.1wAUPDJmjL +++ mktemp ++ local LAST_ERR=/tmp/tmp.19dg0Ne84j ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.1wAUPDJmjL ++ cat /tmp/tmp.19dg0Ne84j ++ rm /tmp/tmp.1wAUPDJmjL /tmp/tmp.19dg0Ne84j ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 13 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.vFSGoQZr8s +++ mktemp ++ local LAST_ERR=/tmp/tmp.v3WIl8ZXmm ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.vFSGoQZr8s ++ cat /tmp/tmp.v3WIl8ZXmm ++ rm /tmp/tmp.vFSGoQZr8s /tmp/tmp.v3WIl8ZXmm ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 14 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.zitmF1X4uI +++ mktemp ++ local LAST_ERR=/tmp/tmp.bnfaIFlnTN ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.zitmF1X4uI ++ cat /tmp/tmp.bnfaIFlnTN ++ rm /tmp/tmp.zitmF1X4uI /tmp/tmp.bnfaIFlnTN ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 15 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.vXqzYQ7Jpt +++ mktemp ++ local LAST_ERR=/tmp/tmp.LR4cE3qDJk ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.vXqzYQ7Jpt ++ cat /tmp/tmp.LR4cE3qDJk ++ rm /tmp/tmp.vXqzYQ7Jpt /tmp/tmp.LR4cE3qDJk ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 16 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.nY5Y9mRPCt +++ mktemp ++ local LAST_ERR=/tmp/tmp.uDMeFPr3P2 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.nY5Y9mRPCt ++ cat /tmp/tmp.uDMeFPr3P2 ++ rm /tmp/tmp.nY5Y9mRPCt /tmp/tmp.uDMeFPr3P2 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 17 -ge 32 ']' + echo -n .
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.b1WeBHo1K3 +++ mktemp ++ local LAST_ERR=/tmp/tmp.0XuIAKPqPO ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.b1WeBHo1K3 ++ cat /tmp/tmp.0XuIAKPqPO ++ rm /tmp/tmp.b1WeBHo1K3 /tmp/tmp.0XuIAKPqPO ++ return 0
+ [[ ready == \r\e\a\d\y ]]
+ compare_mongos_cmd find myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 -sharded
+ local command=find
+ local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638
+ local postfix=-sharded
+ local suffix=
+ local database=myApp
+ local collection=test
+ egrep -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
+ run_mongos 'use myApp\n db.test.find()' myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 mongodb ''
+ local 'command=use myApp\n db.test.find()'
+ local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638
+ local driver=mongodb
+ local suffix=.svc.cluster.local
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.wDLJyUAllH
+++ mktemp
++ local LAST_ERR=/tmp/tmp.Htig6VlrxC
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.wDLJyUAllH
++ cat /tmp/tmp.Htig6VlrxC
++ rm /tmp/tmp.wDLJyUAllH /tmp/tmp.Htig6VlrxC
++ return 0
+ local client_container=psmdb-client-5f578b7f94-kpq79
+ local mongo_flag=
+ kubectl_bin exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin '
++ mktemp
+ local LAST_OUT=/tmp/tmp.zZ1MjAxZkA
++ mktemp
+ local LAST_ERR=/tmp/tmp.rZScJP0m8P
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.zZ1MjAxZkA
+ cat /tmp/tmp.rZScJP0m8P
+ rm /tmp/tmp.zZ1MjAxZkA /tmp/tmp.rZScJP0m8P
+ return 0
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/find-sharded.json /tmp/tmp.ziWkPNXLPI/find-sharded
+ echo

+ set -o xtrace
+ check_exported_mongos_service_endpoint 34.170.230.148
+ local host=34.170.230.148
++ kubectl_bin get psmdb some-name '-o=jsonpath={.status.host}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.cTusRiqFag
+++ mktemp
++ local LAST_ERR=/tmp/tmp.NW500kLcHU
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name '-o=jsonpath={.status.host}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.cTusRiqFag
++ cat /tmp/tmp.NW500kLcHU
++ rm /tmp/tmp.cTusRiqFag /tmp/tmp.NW500kLcHU
++ return 0
+ '[' 34.170.230.148 '!=' 34.170.230.148 ']'
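Note: compare_mongos_cmd above runs a mongo script through the psmdb-client pod and diffs the filtered output against a stored fixture. The core of run_mongos, as it appears in the trace, is essentially:

# Find the client pod, then pipe the mongo script into a mongos connection.
run_mongos() {
    local command=$1
    local uri=$2
    local client
    client=$(kubectl get pods --selector=name=psmdb-client \
        -o 'jsonpath={.items[].metadata.name}')
    kubectl exec "$client" -- bash -c \
        "printf '$command\n' | mongo mongodb://$uri.svc.cluster.local/admin"
}

run_mongos 'use myApp\n db.test.find()' \
    myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638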
+ echo 'drop collection'
drop collection
+ run_mongos 'use myApp\n db.test.drop()' myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638
+ local 'command=use myApp\n db.test.drop()'
+ local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638
+ local driver=mongodb
+ local suffix=.svc.cluster.local
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.x2lC9ZRRD2
+++ mktemp
++ local LAST_ERR=/tmp/tmp.MdslqInolD
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.x2lC9ZRRD2
++ cat /tmp/tmp.MdslqInolD
++ rm /tmp/tmp.x2lC9ZRRD2 /tmp/tmp.MdslqInolD
++ return 0
+ local client_container=psmdb-client-5f578b7f94-kpq79
+ local mongo_flag=
+ kubectl_bin exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin '
++ mktemp
+ local LAST_OUT=/tmp/tmp.JnIJsiGNxi
++ mktemp
+ local LAST_ERR=/tmp/tmp.EPEzKpf3wL
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.JnIJsiGNxi
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("e917b6e9-31ff-4c0e-a23d-97abd580ac8c") }
Percona Server for MongoDB server version: v7.0.8-5
WARNING: shell and server versions do not match
switched to db myApp
true
bye
+ cat /tmp/tmp.EPEzKpf3wL
+ rm /tmp/tmp.JnIJsiGNxi /tmp/tmp.EPEzKpf3wL
+ return 0
+ echo 'check backup and restore -- gcp-cs'
check backup and restore -- gcp-cs
+ run_restore backup-gcp-cs-sharded _restore_sharded
+ local backup_name=backup-gcp-cs-sharded
+ cat /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/conf/restore.yml
+ /usr/bin/sed -e 's/name:/name: restore-backup-gcp-cs-sharded/'
+ /usr/bin/sed -e 's/backupName:/backupName: backup-gcp-cs-sharded/'
+ kubectl_bin apply -f -
++ mktemp
+ local LAST_OUT=/tmp/tmp.YwKf8Ygrdx
++ mktemp
+ local LAST_ERR=/tmp/tmp.AOuyrM0r8A
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl apply -f -
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.YwKf8Ygrdx
perconaservermongodbrestore.psmdb.percona.com/restore-backup-gcp-cs-sharded created
+ cat /tmp/tmp.AOuyrM0r8A
+ rm /tmp/tmp.YwKf8Ygrdx /tmp/tmp.AOuyrM0r8A
+ return 0
+ run_recovery_check backup-gcp-cs-sharded _restore_sharded
+ local backup_name=backup-gcp-cs-sharded
+ local compare_suffix=_restore_sharded
+ wait_restore backup-gcp-cs-sharded some-name requested 0 1200
+ local backup_name=backup-gcp-cs-sharded
+ local cluster_name=some-name
+ local target_state=requested
+ local wait_cluster_consistency=0
+ local wait_time=1200
+ set +o xtrace
waiting psmdb-restore/backup-gcp-cs-sharded to reach requested state............................................................................................................................................
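Note: wait_restore, entered above with target state "requested" and later "ready", polls the restore object much like the cluster readiness loop. A sketch inferred from its arguments (the status field name is an assumption):

# Wait until the PerconaServerMongoDBRestore for a given backup reaches
# the requested state, or fail after wait_time polls.
wait_restore() {
    local backup_name=$1 cluster_name=$2 target_state=$3
    local wait_time=${5:-780} waited=0
    echo -n "waiting psmdb-restore/$backup_name to reach $target_state state"
    until [[ "$(kubectl get psmdb-restore "restore-$backup_name" \
            -o 'jsonpath={.status.state}')" == "$target_state" ]]; do
        waited=$((waited + 1))
        if [ "$waited" -ge "$wait_time" ]; then
            echo " timeout"
            return 1
        fi
        echo -n .
        sleep 1
    done
}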
+ '[' 0 -eq 1 ']'
+ echo

+ compare_kubectl statefulset/some-name-rs0 _restore_sharded
+ local resource=statefulset/some-name-rs0
+ local postfix=_restore_sharded
+ local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml
+ local new_result=/tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml
+ '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded-oc.yml ']'
+ kubectl_bin get -o yaml statefulset/some-name-rs0
+ yq eval '
    del(.metadata.ownerReferences[].apiVersion) |
    del(.metadata.managedFields) |
    del(.. | select(has("creationTimestamp")).creationTimestamp) |
    del(.. | select(has("namespace")).namespace) |
    del(.. | select(has("uid")).uid) |
    del(.metadata.resourceVersion) |
    del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) |
    del(.metadata.selfLink) |
    del(.metadata.annotations."cloud.google.com/neg") |
    del(.. | select(has("image")).image) |
    del(.. | select(has("clusterIP")).clusterIP) |
    del(.. | select(has("clusterIPs")).clusterIPs) |
    del(.. | select(has("dataSource")).dataSource) |
    del(.. | select(has("procMount")).procMount) |
    del(.. | select(has("storageClassName")).storageClassName) |
    del(.. | select(has("finalizers")).finalizers) |
    del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") |
    del(.. | select(has("volumeName")).volumeName) |
    del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") |
    del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") |
    del(.spec.volumeMode) |
    del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") |
    del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") |
    del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") |
    del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") |
    del(.. | select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") |
    del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) |
    del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) |
    del(.. | select(has("nodePort")).nodePort) |
    del(.status) |
    (.. | select(tag == "!!str")) |= sub("demand-backup-physical-sharded-23638", "NAME_SPACE") |
    del(.spec.volumeClaimTemplates[].apiVersion) |
    del(.spec.volumeClaimTemplates[].kind) |
    del(.spec.ipFamilies) |
    del(.spec.ipFamilyPolicy) |
    (.. | select(. == "extensions/v1beta1")) = "apps/v1" |
    (.. | select(. == "batch/v1beta1")) = "batch/v1"
  ' -
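Note: stripped of the retry plumbing, compare_kubectl reduces to: dump the live object, normalize away everything cluster-specific with the yq filter above, then diff against the checked-in expectation. In outline (paths shortened; NORMALIZE_FILTER is a placeholder for the long yq program above):

expected=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml
actual=/tmp/statefulset_some-name-rs0.yml
kubectl get -o yaml statefulset/some-name-rs0 | yq eval "$NORMALIZE_FILTER" - >"$actual"
diff -u "$expected" "$actual"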
== "batch/v1beta1")) = "batch/v1" ' - ++ mktemp + local LAST_OUT=/tmp/tmp.HuaEIiwrk8 ++ mktemp + local LAST_ERR=/tmp/tmp.OttFustiJc + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get -o yaml statefulset/some-name-rs0 + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.HuaEIiwrk8 + cat /tmp/tmp.OttFustiJc + rm /tmp/tmp.HuaEIiwrk8 /tmp/tmp.OttFustiJc + return 0 + yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + version_gt 1.22 ++ echo '1.26 >= 1.22' ++ bc -l + '[' 1 -eq 1 ']' + return 0 + yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml == */cronjob* ]] + diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + wait_restore backup-gcp-cs-sharded some-name ready 0 1800 + local backup_name=backup-gcp-cs-sharded + local cluster_name=some-name + local target_state=ready + local wait_cluster_consistency=0 + local wait_time=1800 + set +o xtrace waiting psmdb-restore/backup-gcp-cs-sharded to reach ready state........................................................................ + '[' 0 -eq 1 ']' + kubectl_bin get psmdb some-name -o yaml ++ mktemp + local LAST_OUT=/tmp/tmp.ofOboWD6pX ++ mktemp + local LAST_ERR=/tmp/tmp.FUTezupSUz + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get psmdb some-name -o yaml + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.ofOboWD6pX apiVersion: psmdb.percona.com/v1 kind: PerconaServerMongoDB metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"name":"some-name","namespace":"demand-backup-physical-sharded-23638"},"spec":{"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","storages":{"aws-s3":{"s3":{"bucket":"operator-testing","credentialsSecret":"aws-s3-secret","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"azure-blob":{"azure":{"container":"operator-testing","credentialsSecret":"azure-secret","prefix":"psmdb-demand-backup-physical-sharded"},"type":"azure"},"gcp-cs":{"s3":{"bucket":"operator-testing","credentialsSecret":"gcp-cs-secret","endpointUrl":"https://storage.googleapis.com","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"minio":{"s3":{"bucket":"operator-testing","credentialsSecret":"minio-secret","endpointUrl":"http://minio-service:9000/","insecureSkipTLSVerify":false,"region":"us-east-1"},"type":"s3"}},"tasks":[{"compressionType":"gzip","enabled":true,"name":"weekly","schedule":"0 0 * * 0","storageName":"aws-s3"}]},"image":"perconalab/percona-server-mongodb-operator:main-mongod7.0","imagePullPolicy":"Always","replsets":[{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: false\nsetParameter:\n 
    percona.com/resync-pbm: "true"
  creationTimestamp: "2024-05-07T11:01:06Z"
  generation: 2
  name: some-name
  namespace: demand-backup-physical-sharded-23638
  resourceVersion: "17939"
  uid: bb928bff-3e1d-44d8-b03a-cc91e5e421e9
spec:
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    storages:
      aws-s3:
        s3:
          bucket: operator-testing
          credentialsSecret: aws-s3-secret
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical-sharded
          region: us-east-1
        type: s3
      azure-blob:
        azure:
          container: operator-testing
          credentialsSecret: azure-secret
          prefix: psmdb-demand-backup-physical-sharded
        type: azure
      gcp-cs:
        s3:
          bucket: operator-testing
          credentialsSecret: gcp-cs-secret
          endpointUrl: https://storage.googleapis.com
          insecureSkipTLSVerify: false
          prefix: psmdb-demand-backup-physical-sharded
          region: us-east-1
        type: s3
      minio:
        s3:
          bucket: operator-testing
          credentialsSecret: minio-secret
          endpointUrl: http://minio-service:9000/
          insecureSkipTLSVerify: false
          region: us-east-1
        type: s3
    tasks:
    - compressionType: gzip
      enabled: true
      name: weekly
      schedule: 0 0 * * 0
      storageName: aws-s3
  crVersion: 1.16.0
  image: perconalab/percona-server-mongodb-operator:main-mongod7.0
  imagePullPolicy: Always
  replsets:
  - affinity:
      antiAffinityTopologyKey: none
    configuration: |
      operationProfiling:
        mode: slowOp
        slowOpThresholdMs: 100
      security:
        enableEncryption: true
        redactClientLogData: false
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      storage:
        engine: wiredTiger
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
    expose:
      enabled: false
      exposeType: ClusterIP
    name: rs0
    resources:
      limits:
        cpu: 500m
        memory: 1G
      requests:
        cpu: 100m
        memory: 0.1G
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
storage: 3Gi secrets: users: some-users sharding: configsvrReplSet: affinity: antiAffinityTopologyKey: none configuration: | operationProfiling: mode: slowOp slowOpThresholdMs: 100 security: enableEncryption: true redactClientLogData: false setParameter: ttlMonitorSleepSecs: 60 wiredTigerConcurrentReadTransactions: 128 wiredTigerConcurrentWriteTransactions: 128 storage: engine: wiredTiger wiredTiger: collectionConfig: blockCompressor: snappy engineConfig: directoryForIndexes: false journalCompressor: snappy indexConfig: prefixCompression: true expose: enabled: false exposeType: ClusterIP resources: limits: cpu: 500m memory: 1G requests: cpu: 100m memory: 0.1G size: 3 volumeSpec: persistentVolumeClaim: resources: requests: storage: 3Gi enabled: true mongos: affinity: antiAffinityTopologyKey: none expose: exposeType: LoadBalancer resources: limits: cpu: 500m memory: 1G requests: cpu: 100m memory: 0.1G size: 3 upgradeOptions: apply: Never status: conditions: - lastTransitionTime: "2024-05-07T11:10:09Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:10:09Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:20:51Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:20:51Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:21:17Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:21:17Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:21:33Z" reason: MongosReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:21:52Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:22:28Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:22:28Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:22:30Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:22:30Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:23:08Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:23:08Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:23:15Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:23:15Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:23:44Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:23:43Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:23:50Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:23:50Z" status: "True" type: initializing host: 34.170.230.148 mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0 mongoVersion: 7.0.8-5 mongos: ready: 0 size: 0 status: initializing observedGeneration: 2 ready: 0 replsets: cfg: initialized: true ready: 0 size: 3 status: initializing rs0: added_as_shard: true initialized: true ready: 0 size: 3 status: initializing size: 6 state: initializing + cat /tmp/tmp.FUTezupSUz + rm /tmp/tmp.ofOboWD6pX /tmp/tmp.FUTezupSUz + return 0 ++ kubectl_bin get psmdb some-name -o yaml ++ yq '.metadata.annotations."percona.com/resync-pbm"' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Hpe6xnfqJR +++ mktemp ++ local LAST_ERR=/tmp/tmp.nFZeBNfBZe ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i 
in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o yaml ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Hpe6xnfqJR ++ cat /tmp/tmp.nFZeBNfBZe ++ rm /tmp/tmp.Hpe6xnfqJR /tmp/tmp.nFZeBNfBZe ++ return 0 + '[' true == null ']' + echo + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.MVhAhFAk4r +++ mktemp ++ local LAST_ERR=/tmp/tmp.xds6VDNyvc ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.MVhAhFAk4r ++ cat /tmp/tmp.xds6VDNyvc ++ rm /tmp/tmp.MVhAhFAk4r /tmp/tmp.xds6VDNyvc ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 1 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.FMUoEILmUx +++ mktemp ++ local LAST_ERR=/tmp/tmp.y0HGqnMjFE ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.FMUoEILmUx ++ cat /tmp/tmp.y0HGqnMjFE ++ rm /tmp/tmp.FMUoEILmUx /tmp/tmp.y0HGqnMjFE ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 2 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.UDd3yQMoNp +++ mktemp ++ local LAST_ERR=/tmp/tmp.VEzkG1VGI3 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.UDd3yQMoNp ++ cat /tmp/tmp.VEzkG1VGI3 ++ rm /tmp/tmp.UDd3yQMoNp /tmp/tmp.VEzkG1VGI3 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 3 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.wZIoRG8nc4 +++ mktemp ++ local LAST_ERR=/tmp/tmp.q5jk0yTxNV ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.wZIoRG8nc4 ++ cat /tmp/tmp.q5jk0yTxNV ++ rm /tmp/tmp.wZIoRG8nc4 /tmp/tmp.q5jk0yTxNV ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 4 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.BhP7V3yMbz +++ mktemp ++ local LAST_ERR=/tmp/tmp.g01vOquEVk ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.BhP7V3yMbz ++ cat /tmp/tmp.g01vOquEVk ++ rm /tmp/tmp.BhP7V3yMbz /tmp/tmp.g01vOquEVk ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 5 -ge 32 ']' + echo -n . 
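-----------------------------------------------------------------------------------
note: cluster readiness polling (sketch of the pattern traced above)
-----------------------------------------------------------------------------------
The iterations above come from wait_cluster_consistency: after each restore the
harness re-reads .status.state from the psmdb object roughly every 10 seconds
(after an initial 7-second settle), for at most 32 tries, until it reports
ready. Stripped of the kubectl_bin temp-file plumbing, the same loop can be
sketched like this (names and limits are taken from the trace; plain kubectl
stands in for the wrapper):

#!/bin/bash
cluster_name=some-name
max_retries=32
retry=0
sleep 7
echo -n 'waiting for cluster readiness'
# Re-check the operator-reported state until it becomes "ready".
until [[ "$(kubectl get psmdb "$cluster_name" -o 'jsonpath={.status.state}')" == "ready" ]]; do
    retry=$((retry + 1))
    if [ "$retry" -ge "$max_retries" ]; then
        echo 'cluster did not reach ready state in time' >&2
        exit 1
    fi
    echo -n .
    sleep 10
done
echo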
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.4FRWKkoxCv +++ mktemp ++ local LAST_ERR=/tmp/tmp.g5vyEhV08c ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.4FRWKkoxCv ++ cat /tmp/tmp.g5vyEhV08c ++ rm /tmp/tmp.4FRWKkoxCv /tmp/tmp.g5vyEhV08c ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 6 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.zL0qMaa2bD +++ mktemp ++ local LAST_ERR=/tmp/tmp.dJXrSEdLit ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.zL0qMaa2bD ++ cat /tmp/tmp.dJXrSEdLit ++ rm /tmp/tmp.zL0qMaa2bD /tmp/tmp.dJXrSEdLit ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 7 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.3qb11JfIqL +++ mktemp ++ local LAST_ERR=/tmp/tmp.MtyqC2dpKA ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.3qb11JfIqL ++ cat /tmp/tmp.MtyqC2dpKA ++ rm /tmp/tmp.3qb11JfIqL /tmp/tmp.MtyqC2dpKA ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 8 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.cu6OmBXwjT +++ mktemp ++ local LAST_ERR=/tmp/tmp.MJDVtZtGj9 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.cu6OmBXwjT ++ cat /tmp/tmp.MJDVtZtGj9 ++ rm /tmp/tmp.cu6OmBXwjT /tmp/tmp.MJDVtZtGj9 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 9 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.OEtaPEh2HZ +++ mktemp ++ local LAST_ERR=/tmp/tmp.wcayUup5pp ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.OEtaPEh2HZ ++ cat /tmp/tmp.wcayUup5pp ++ rm /tmp/tmp.OEtaPEh2HZ /tmp/tmp.wcayUup5pp ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 10 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.yg1dNjQqvb +++ mktemp ++ local LAST_ERR=/tmp/tmp.SWvYSdt6fk ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.yg1dNjQqvb ++ cat /tmp/tmp.SWvYSdt6fk ++ rm /tmp/tmp.yg1dNjQqvb /tmp/tmp.SWvYSdt6fk ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 11 -ge 32 ']' + echo -n . 
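-----------------------------------------------------------------------------------
note: the kubectl_bin wrapper (reconstruction)
-----------------------------------------------------------------------------------
Every kubectl_bin call in this log expands to the same boilerplate: create two
mktemp files for stdout and stderr, try the command up to three times (seq 0 2),
break on success, then replay the captured output and remove the temp files. A
hedged reconstruction of that helper, inferred from the expansions in the trace
(the real function in e2e-tests/functions may differ in details such as error
matching and backoff):

#!/bin/bash
kubectl_bin() {
    local out err rc=0 delay=4
    out=$(mktemp)
    err=$(mktemp)
    for i in $(seq 0 2); do
        set +e
        kubectl "$@" 1>"$out" 2>"$err"
        rc=$?
        set -e
        # Stop retrying as soon as the command succeeds.
        [ "$rc" -eq 0 ] && break
        sleep "$delay"
    done
    cat "$out"
    cat "$err" >&2
    rm "$out" "$err"
    return "$rc"
}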
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.0UKCZIhhvj +++ mktemp ++ local LAST_ERR=/tmp/tmp.EI9HV7lY5O ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.0UKCZIhhvj ++ cat /tmp/tmp.EI9HV7lY5O ++ rm /tmp/tmp.0UKCZIhhvj /tmp/tmp.EI9HV7lY5O ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 12 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.RoAkHpw9QW +++ mktemp ++ local LAST_ERR=/tmp/tmp.yqxzx5nJBk ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.RoAkHpw9QW ++ cat /tmp/tmp.yqxzx5nJBk ++ rm /tmp/tmp.RoAkHpw9QW /tmp/tmp.yqxzx5nJBk ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 13 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.z3trg1AK5M +++ mktemp ++ local LAST_ERR=/tmp/tmp.9FWj9sRiie ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.z3trg1AK5M ++ cat /tmp/tmp.9FWj9sRiie ++ rm /tmp/tmp.z3trg1AK5M /tmp/tmp.9FWj9sRiie ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 14 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.eb4CkuN1bd +++ mktemp ++ local LAST_ERR=/tmp/tmp.9rpP4jVUJv ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.eb4CkuN1bd ++ cat /tmp/tmp.9rpP4jVUJv ++ rm /tmp/tmp.eb4CkuN1bd /tmp/tmp.9rpP4jVUJv ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 15 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.CLND65QVA1 +++ mktemp ++ local LAST_ERR=/tmp/tmp.hDf6MPESjc ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.CLND65QVA1 ++ cat /tmp/tmp.hDf6MPESjc ++ rm /tmp/tmp.CLND65QVA1 /tmp/tmp.hDf6MPESjc ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 16 -ge 32 ']' + echo -n . 
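-----------------------------------------------------------------------------------
note: scrubbing mongo output before diffing (filter quoted in the trace)
-----------------------------------------------------------------------------------
Once the state flips to ready (in the block below), compare_mongos_cmd reruns
the find and diffs the result against compare/find-sharded.json. Before the
diff, the raw shell output is passed through the egrep/sed stage quoted in the
trace, which drops banner noise and blanks out run-specific tokens. As a
stand-alone filter reading the shell output on stdin:

#!/bin/bash
# Drop connection banners, warnings and history-file noise ...
egrep -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' |
    # ... then erase ObjectId values and the per-run namespace suffix.
    sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'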
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.j1iT7DAQ8L +++ mktemp ++ local LAST_ERR=/tmp/tmp.qzFTnlJfWi ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.j1iT7DAQ8L ++ cat /tmp/tmp.qzFTnlJfWi ++ rm /tmp/tmp.j1iT7DAQ8L /tmp/tmp.qzFTnlJfWi ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongos_cmd find myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 -sharded + local command=find + local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local postfix=-sharded + local suffix= + local database=myApp + local collection=test + egrep -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' + run_mongos 'use myApp\n db.test.find()' myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ local LAST_OUT=/tmp/tmp.j9cQz8NmUI +++ mktemp ++ local LAST_ERR=/tmp/tmp.58BKLMAQwW ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.j9cQz8NmUI ++ cat /tmp/tmp.58BKLMAQwW ++ rm /tmp/tmp.j9cQz8NmUI /tmp/tmp.58BKLMAQwW ++ return 0 + local client_container=psmdb-client-5f578b7f94-kpq79 + local mongo_flag= + kubectl_bin exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' ++ mktemp + local LAST_OUT=/tmp/tmp.LhEiOAYERN ++ mktemp + local LAST_ERR=/tmp/tmp.FA33iAUgZO + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.LhEiOAYERN + cat /tmp/tmp.FA33iAUgZO + rm /tmp/tmp.LhEiOAYERN /tmp/tmp.FA33iAUgZO + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/find-sharded.json /tmp/tmp.ziWkPNXLPI/find-sharded + echo + set -o xtrace + check_exported_mongos_service_endpoint 34.170.230.148 + local host=34.170.230.148 ++ kubectl_bin get psmdb some-name '-o=jsonpath={.status.host}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.sG2jPT3MtU +++ mktemp ++ local LAST_ERR=/tmp/tmp.q2nzMWlAgI ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name '-o=jsonpath={.status.host}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.sG2jPT3MtU ++ cat /tmp/tmp.q2nzMWlAgI ++ rm /tmp/tmp.sG2jPT3MtU /tmp/tmp.q2nzMWlAgI ++ return 0 + '[' 
34.170.230.148 '!=' 34.170.230.148 ']' + echo 'drop collection' drop collection + run_mongos 'use myApp\n db.test.drop()' myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local 'command=use myApp\n db.test.drop()' + local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.fPWuhqj6Zl +++ mktemp ++ local LAST_ERR=/tmp/tmp.cKHXv2x5kP ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.fPWuhqj6Zl ++ cat /tmp/tmp.cKHXv2x5kP ++ rm /tmp/tmp.fPWuhqj6Zl /tmp/tmp.cKHXv2x5kP ++ return 0 + local client_container=psmdb-client-5f578b7f94-kpq79 + local mongo_flag= + kubectl_bin exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' ++ mktemp + local LAST_OUT=/tmp/tmp.CpVFt2EGuc ++ mktemp + local LAST_ERR=/tmp/tmp.WhXLzUZEXG + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.CpVFt2EGuc Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb Implicit session: session { "id" : UUID("3689bc58-6429-41f6-ba00-43c4cf41d608") } Percona Server for MongoDB server version: v7.0.8-5 WARNING: shell and server versions do not match switched to db myApp true bye + cat /tmp/tmp.WhXLzUZEXG + rm /tmp/tmp.CpVFt2EGuc /tmp/tmp.WhXLzUZEXG + return 0 + echo 'check backup and restore -- azure-blob' check backup and restore -- azure-blob + run_restore backup-azure-blob-sharded _restore_sharded + local backup_name=backup-azure-blob-sharded + /usr/bin/sed -e 's/name:/name: restore-backup-azure-blob-sharded/' + kubectl_bin apply -f - + cat /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/conf/restore.yml ++ mktemp + local LAST_OUT=/tmp/tmp.qJ4RVoWNga ++ mktemp + local LAST_ERR=/tmp/tmp.ErUOi4nN5o + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl apply -f - + /usr/bin/sed -e 's/backupName:/backupName: backup-azure-blob-sharded/' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.qJ4RVoWNga perconaservermongodbrestore.psmdb.percona.com/restore-backup-azure-blob-sharded created + cat /tmp/tmp.ErUOi4nN5o + rm /tmp/tmp.qJ4RVoWNga /tmp/tmp.ErUOi4nN5o + return 0 + run_recovery_check backup-azure-blob-sharded _restore_sharded + local backup_name=backup-azure-blob-sharded + local compare_suffix=_restore_sharded + wait_restore backup-azure-blob-sharded some-name requested 0 1200 + local backup_name=backup-azure-blob-sharded + local cluster_name=some-name + local target_state=requested + local wait_cluster_consistency=0 + local wait_time=1200 + set +o xtrace waiting 
psmdb-restore/backup-azure-blob-sharded to reach requested state.................................................................................................... + '[' 0 -eq 1 ']' + echo + compare_kubectl statefulset/some-name-rs0 _restore_sharded + local resource=statefulset/some-name-rs0 + local postfix=_restore_sharded + local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml + local new_result=/tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded-oc.yml ']' + kubectl_bin get -o yaml statefulset/some-name-rs0 + yq eval ' del(.metadata.ownerReferences[].apiVersion) | del(.metadata.managedFields) | del(.. | select(has("creationTimestamp")).creationTimestamp) | del(.. | select(has("namespace")).namespace) | del(.. | select(has("uid")).uid) | del(.metadata.resourceVersion) | del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) | del(.metadata.selfLink) | del(.metadata.annotations."cloud.google.com/neg") | del(.. | select(has("image")).image) | del(.. | select(has("clusterIP")).clusterIP) | del(.. | select(has("clusterIPs")).clusterIPs) | del(.. | select(has("dataSource")).dataSource) | del(.. | select(has("procMount")).procMount) | del(.. | select(has("storageClassName")).storageClassName) | del(.. | select(has("finalizers")).finalizers) | del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") | del(.. | select(has("volumeName")).volumeName) | del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") | del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") | del(.spec.volumeMode) | del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") | del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") | del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") | del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") | del(.. | select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") | del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) | del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) | del(.. | select(has("nodePort")).nodePort) | del(.status) | (.. | select(tag == "!!str")) |= sub("demand-backup-physical-sharded-23638", "NAME_SPACE") | del(.spec.volumeClaimTemplates[].apiVersion) | del(.spec.volumeClaimTemplates[].kind) | del(.spec.ipFamilies) | del(.spec.ipFamilyPolicy) | (.. | select(. == "extensions/v1beta1")) = "apps/v1" | (.. | select(. 
== "batch/v1beta1")) = "batch/v1" ' - ++ mktemp + local LAST_OUT=/tmp/tmp.QCVdQTcTlC ++ mktemp + local LAST_ERR=/tmp/tmp.u8hAErutG3 + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get -o yaml statefulset/some-name-rs0 + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.QCVdQTcTlC + cat /tmp/tmp.u8hAErutG3 + rm /tmp/tmp.QCVdQTcTlC /tmp/tmp.u8hAErutG3 + return 0 + yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + version_gt 1.22 ++ echo '1.26 >= 1.22' ++ bc -l + '[' 1 -eq 1 ']' + return 0 + yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml == */cronjob* ]] + diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + wait_restore backup-azure-blob-sharded some-name ready 0 1800 + local backup_name=backup-azure-blob-sharded + local cluster_name=some-name + local target_state=ready + local wait_cluster_consistency=0 + local wait_time=1800 + set +o xtrace waiting psmdb-restore/backup-azure-blob-sharded to reach ready state........................................................................... + '[' 0 -eq 1 ']' + kubectl_bin get psmdb some-name -o yaml ++ mktemp + local LAST_OUT=/tmp/tmp.buKzMqvjQu ++ mktemp + local LAST_ERR=/tmp/tmp.UqpAGFIinO + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get psmdb some-name -o yaml + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.buKzMqvjQu apiVersion: psmdb.percona.com/v1 kind: PerconaServerMongoDB metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"name":"some-name","namespace":"demand-backup-physical-sharded-23638"},"spec":{"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","storages":{"aws-s3":{"s3":{"bucket":"operator-testing","credentialsSecret":"aws-s3-secret","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"azure-blob":{"azure":{"container":"operator-testing","credentialsSecret":"azure-secret","prefix":"psmdb-demand-backup-physical-sharded"},"type":"azure"},"gcp-cs":{"s3":{"bucket":"operator-testing","credentialsSecret":"gcp-cs-secret","endpointUrl":"https://storage.googleapis.com","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"minio":{"s3":{"bucket":"operator-testing","credentialsSecret":"minio-secret","endpointUrl":"http://minio-service:9000/","insecureSkipTLSVerify":false,"region":"us-east-1"},"type":"s3"}},"tasks":[{"compressionType":"gzip","enabled":true,"name":"weekly","schedule":"0 0 * * 0","storageName":"aws-s3"}]},"image":"perconalab/percona-server-mongodb-operator:main-mongod7.0","imagePullPolicy":"Always","replsets":[{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: 
false\nsetParameter:\n ttlMonitorSleepSecs: 60\n wiredTigerConcurrentReadTransactions: 128\n wiredTigerConcurrentWriteTransactions: 128\nstorage:\n engine: wiredTiger\n wiredTiger:\n collectionConfig:\n blockCompressor: snappy\n engineConfig:\n directoryForIndexes: false\n journalCompressor: snappy\n indexConfig:\n prefixCompression: true\n","expose":{"enabled":false,"exposeType":"ClusterIP"},"name":"rs0","resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}}],"secrets":{"users":"some-users"},"sharding":{"configsvrReplSet":{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: false\nsetParameter:\n ttlMonitorSleepSecs: 60\n wiredTigerConcurrentReadTransactions: 128\n wiredTigerConcurrentWriteTransactions: 128\nstorage:\n engine: wiredTiger\n wiredTiger:\n collectionConfig:\n blockCompressor: snappy\n engineConfig:\n directoryForIndexes: false\n journalCompressor: snappy\n indexConfig:\n prefixCompression: true\n","expose":{"enabled":false,"exposeType":"ClusterIP"},"resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3,"volumeSpec":{"persistentVolumeClaim":{"resources":{"requests":{"storage":"3Gi"}}}}},"enabled":true,"mongos":{"affinity":{"antiAffinityTopologyKey":"none"},"expose":{"exposeType":"LoadBalancer"},"resources":{"limits":{"cpu":"500m","memory":"1G"},"requests":{"cpu":"100m","memory":"0.1G"}},"size":3}},"upgradeOptions":{"apply":"Never"}}} percona.com/resync-pbm: "true" creationTimestamp: "2024-05-07T11:01:06Z" generation: 2 name: some-name namespace: demand-backup-physical-sharded-23638 resourceVersion: "22989" uid: bb928bff-3e1d-44d8-b03a-cc91e5e421e9 spec: backup: enabled: true image: perconalab/percona-server-mongodb-operator:main-backup storages: aws-s3: s3: bucket: operator-testing credentialsSecret: aws-s3-secret insecureSkipTLSVerify: false prefix: psmdb-demand-backup-physical-sharded region: us-east-1 type: s3 azure-blob: azure: container: operator-testing credentialsSecret: azure-secret prefix: psmdb-demand-backup-physical-sharded type: azure gcp-cs: s3: bucket: operator-testing credentialsSecret: gcp-cs-secret endpointUrl: https://storage.googleapis.com insecureSkipTLSVerify: false prefix: psmdb-demand-backup-physical-sharded region: us-east-1 type: s3 minio: s3: bucket: operator-testing credentialsSecret: minio-secret endpointUrl: http://minio-service:9000/ insecureSkipTLSVerify: false region: us-east-1 type: s3 tasks: - compressionType: gzip enabled: true name: weekly schedule: 0 0 * * 0 storageName: aws-s3 crVersion: 1.16.0 image: perconalab/percona-server-mongodb-operator:main-mongod7.0 imagePullPolicy: Always replsets: - affinity: antiAffinityTopologyKey: none configuration: | operationProfiling: mode: slowOp slowOpThresholdMs: 100 security: enableEncryption: true redactClientLogData: false setParameter: ttlMonitorSleepSecs: 60 wiredTigerConcurrentReadTransactions: 128 wiredTigerConcurrentWriteTransactions: 128 storage: engine: wiredTiger wiredTiger: collectionConfig: blockCompressor: snappy engineConfig: directoryForIndexes: false journalCompressor: snappy indexConfig: prefixCompression: true expose: enabled: false exposeType: ClusterIP name: rs0 resources: limits: cpu: 500m memory: 1G requests: cpu: 100m memory: 0.1G size: 3 volumeSpec: persistentVolumeClaim: 
resources: requests: storage: 3Gi secrets: users: some-users sharding: configsvrReplSet: affinity: antiAffinityTopologyKey: none configuration: | operationProfiling: mode: slowOp slowOpThresholdMs: 100 security: enableEncryption: true redactClientLogData: false setParameter: ttlMonitorSleepSecs: 60 wiredTigerConcurrentReadTransactions: 128 wiredTigerConcurrentWriteTransactions: 128 storage: engine: wiredTiger wiredTiger: collectionConfig: blockCompressor: snappy engineConfig: directoryForIndexes: false journalCompressor: snappy indexConfig: prefixCompression: true expose: enabled: false exposeType: ClusterIP resources: limits: cpu: 500m memory: 1G requests: cpu: 100m memory: 0.1G size: 3 volumeSpec: persistentVolumeClaim: resources: requests: storage: 3Gi enabled: true mongos: affinity: antiAffinityTopologyKey: none expose: exposeType: LoadBalancer resources: limits: cpu: 500m memory: 1G requests: cpu: 100m memory: 0.1G size: 3 upgradeOptions: apply: Never status: conditions: - lastTransitionTime: "2024-05-07T11:23:50Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:23:50Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:31:30Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:31:30Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:31:53Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:31:53Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:32:02Z" reason: MongosReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:32:30Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:33:05Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:33:05Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:33:06Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:33:06Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:33:37Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:33:37Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:33:44Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:33:44Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:34:11Z" message: 'rs0: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:34:11Z" status: "True" type: initializing - lastTransitionTime: "2024-05-07T11:34:18Z" message: 'cfg: ready' reason: RSReady status: "True" type: ready - lastTransitionTime: "2024-05-07T11:34:18Z" status: "True" type: initializing host: 34.170.230.148 mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0 mongoVersion: 7.0.8-5 mongos: ready: 0 size: 0 status: initializing observedGeneration: 2 ready: 0 replsets: cfg: initialized: true ready: 0 size: 3 status: initializing rs0: added_as_shard: true initialized: true ready: 0 size: 3 status: initializing size: 6 state: initializing + cat /tmp/tmp.UqpAGFIinO + rm /tmp/tmp.buKzMqvjQu /tmp/tmp.UqpAGFIinO + return 0 ++ yq '.metadata.annotations."percona.com/resync-pbm"' ++ kubectl_bin get psmdb some-name -o yaml +++ mktemp ++ local LAST_OUT=/tmp/tmp.5WsRoaviU3 +++ mktemp ++ local LAST_ERR=/tmp/tmp.0yhsSzuVSx ++ local exit_status=0 ++ local timeout=4 
+++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o yaml ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.5WsRoaviU3 ++ cat /tmp/tmp.0yhsSzuVSx ++ rm /tmp/tmp.5WsRoaviU3 /tmp/tmp.0yhsSzuVSx ++ return 0 + '[' true == null ']' + echo + wait_cluster_consistency some-name + local cluster_name=some-name + local wait_time=32 + retry=0 + sleep 7 + echo -n 'waiting for cluster readyness' waiting for cluster readyness++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.HyXIQkeOlA +++ mktemp ++ local LAST_ERR=/tmp/tmp.ODLnQLSNG3 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.HyXIQkeOlA ++ cat /tmp/tmp.ODLnQLSNG3 ++ rm /tmp/tmp.HyXIQkeOlA /tmp/tmp.ODLnQLSNG3 ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 1 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Ez0TfzoKDj +++ mktemp ++ local LAST_ERR=/tmp/tmp.Nm6ScIE8uF ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Ez0TfzoKDj ++ cat /tmp/tmp.Nm6ScIE8uF ++ rm /tmp/tmp.Ez0TfzoKDj /tmp/tmp.Nm6ScIE8uF ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 2 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.HXr6wNIs6Y +++ mktemp ++ local LAST_ERR=/tmp/tmp.j9K2B5TIwu ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.HXr6wNIs6Y ++ cat /tmp/tmp.j9K2B5TIwu ++ rm /tmp/tmp.HXr6wNIs6Y /tmp/tmp.j9K2B5TIwu ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 3 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.D27b5UceJy +++ mktemp ++ local LAST_ERR=/tmp/tmp.xDPgkeZKBg ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.D27b5UceJy ++ cat /tmp/tmp.xDPgkeZKBg ++ rm /tmp/tmp.D27b5UceJy /tmp/tmp.xDPgkeZKBg ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 4 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Ds1P2WjMXK +++ mktemp ++ local LAST_ERR=/tmp/tmp.edZyuOQJuT ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Ds1P2WjMXK ++ cat /tmp/tmp.edZyuOQJuT ++ rm /tmp/tmp.Ds1P2WjMXK /tmp/tmp.edZyuOQJuT ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 5 -ge 32 ']' + echo -n . 
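-----------------------------------------------------------------------------------
note: compare_kubectl normalization (condensed sketch)
-----------------------------------------------------------------------------------
The long yq program printed before each statefulset diff (most recently for the
azure-blob restore above) is compare_kubectl's normalizer: it deletes every
field that legitimately differs between runs - uids, timestamps, images,
cluster IPs, config hashes, node ports, status - and rewrites the namespace to
NAME_SPACE, so the live object can be diffed against a golden file. A condensed
sketch of the same idea, trimmed to a few representative deletions (the real
filter in the trace removes many more fields):

#!/bin/bash
expected=compare/statefulset_some-name-rs0_restore_sharded.yml
kubectl get -o yaml statefulset/some-name-rs0 |
    yq eval '
        del(.metadata.managedFields) |
        del(.metadata.resourceVersion) |
        del(.. | select(has("uid")).uid) |
        del(.. | select(has("creationTimestamp")).creationTimestamp) |
        del(.. | select(has("image")).image) |
        del(.status) |
        (.. | select(tag == "!!str")) |= sub("demand-backup-physical-sharded-23638", "NAME_SPACE")
    ' - >/tmp/statefulset_some-name-rs0.yml
diff -u "$expected" /tmp/statefulset_some-name-rs0.yml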
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.7lb70Y0juB +++ mktemp ++ local LAST_ERR=/tmp/tmp.lOomGwsdCg ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.7lb70Y0juB ++ cat /tmp/tmp.lOomGwsdCg ++ rm /tmp/tmp.7lb70Y0juB /tmp/tmp.lOomGwsdCg ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 6 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.1tESER3986 +++ mktemp ++ local LAST_ERR=/tmp/tmp.RcJIWcZO4z ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.1tESER3986 ++ cat /tmp/tmp.RcJIWcZO4z ++ rm /tmp/tmp.1tESER3986 /tmp/tmp.RcJIWcZO4z ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 7 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.6moHaywRym +++ mktemp ++ local LAST_ERR=/tmp/tmp.lUnG71Qf4f ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.6moHaywRym ++ cat /tmp/tmp.lUnG71Qf4f ++ rm /tmp/tmp.6moHaywRym /tmp/tmp.lUnG71Qf4f ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 8 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.thDWhpyWAt +++ mktemp ++ local LAST_ERR=/tmp/tmp.rKTlMJBzZT ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.thDWhpyWAt ++ cat /tmp/tmp.rKTlMJBzZT ++ rm /tmp/tmp.thDWhpyWAt /tmp/tmp.rKTlMJBzZT ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 9 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.Uq2HMmYKhR +++ mktemp ++ local LAST_ERR=/tmp/tmp.u1SiuJyYtl ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.Uq2HMmYKhR ++ cat /tmp/tmp.u1SiuJyYtl ++ rm /tmp/tmp.Uq2HMmYKhR /tmp/tmp.u1SiuJyYtl ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 10 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.0odlqp7kLd +++ mktemp ++ local LAST_ERR=/tmp/tmp.DN4Mn8kWUQ ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.0odlqp7kLd ++ cat /tmp/tmp.DN4Mn8kWUQ ++ rm /tmp/tmp.0odlqp7kLd /tmp/tmp.DN4Mn8kWUQ ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 11 -ge 32 ']' + echo -n . 
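-----------------------------------------------------------------------------------
note: run_mongos exec pattern (as expanded in the trace)
-----------------------------------------------------------------------------------
Every mongo command in this test, including the drop-collection step earlier,
goes through run_mongos: look up the psmdb-client pod by label, then exec a
printf piped into the mongo shell against the mongos service. A minimal version
using the names from this run:

#!/bin/bash
uri='myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638'
suffix='.svc.cluster.local'
# Find the client pod that ships the mongo shell.
client_pod=$(kubectl get pods --selector=name=psmdb-client \
    -o 'jsonpath={.items[].metadata.name}')
# Feed the commands to mongo over kubectl exec.
kubectl exec "$client_pod" -- bash -c \
    "printf 'use myApp\n db.test.find()\n' | mongo mongodb://${uri}${suffix}/admin"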
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.EjJ24OUpNu +++ mktemp ++ local LAST_ERR=/tmp/tmp.YNBr7qBQNw ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.EjJ24OUpNu ++ cat /tmp/tmp.YNBr7qBQNw ++ rm /tmp/tmp.EjJ24OUpNu /tmp/tmp.YNBr7qBQNw ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 12 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.68wtTFIepl +++ mktemp ++ local LAST_ERR=/tmp/tmp.7elpCaUI3k ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.68wtTFIepl ++ cat /tmp/tmp.7elpCaUI3k ++ rm /tmp/tmp.68wtTFIepl /tmp/tmp.7elpCaUI3k ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 13 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.ipLjX1tWJW +++ mktemp ++ local LAST_ERR=/tmp/tmp.9zo1fIkayq ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.ipLjX1tWJW ++ cat /tmp/tmp.9zo1fIkayq ++ rm /tmp/tmp.ipLjX1tWJW /tmp/tmp.9zo1fIkayq ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 14 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.UxAFRDZtFw +++ mktemp ++ local LAST_ERR=/tmp/tmp.cIJAiUXxhz ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.UxAFRDZtFw ++ cat /tmp/tmp.cIJAiUXxhz ++ rm /tmp/tmp.UxAFRDZtFw /tmp/tmp.cIJAiUXxhz ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 15 -ge 32 ']' + echo -n . .+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.B8kphaE5hI +++ mktemp ++ local LAST_ERR=/tmp/tmp.Ju6t5m12wp ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.B8kphaE5hI ++ cat /tmp/tmp.Ju6t5m12wp ++ rm /tmp/tmp.B8kphaE5hI /tmp/tmp.Ju6t5m12wp ++ return 0 + [[ initializing == \r\e\a\d\y ]] + let retry+=1 + '[' 16 -ge 32 ']' + echo -n . 
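-----------------------------------------------------------------------------------
note: resolving a backup destination (helper quoted later in the trace)
-----------------------------------------------------------------------------------
When the loop below finally reports ready, the test moves on to the minio
restore, which first resolves where the backup landed. get_backup_dest, as
expanded further down, reads .status.destination from the psmdb-backup object
and strips the scheme and any .json suffix (the trace pipes through three
separate seds; one sed with three expressions is equivalent):

#!/bin/bash
get_backup_dest() {
    local backup_name=$1
    kubectl get psmdb-backup "$backup_name" -o 'jsonpath={.status.destination}' |
        sed -e 's/.json$//' -e 's|s3://||' -e 's|azure://||'
}
# e.g. prints "operator-testing/2024-05-07T11:05:24Z" for this run
backup_dest_minio=$(get_backup_dest backup-minio-sharded)
echo "$backup_dest_minio"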
.+ sleep 10 ++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.p1Xbt7HCek +++ mktemp ++ local LAST_ERR=/tmp/tmp.B9TnKdVFqv ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name -o 'jsonpath={.status.state}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.p1Xbt7HCek ++ cat /tmp/tmp.B9TnKdVFqv ++ rm /tmp/tmp.p1Xbt7HCek /tmp/tmp.B9TnKdVFqv ++ return 0 + [[ ready == \r\e\a\d\y ]] + compare_mongos_cmd find myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 -sharded + local command=find + local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local postfix=-sharded + local suffix= + local database=myApp + local collection=test + run_mongos 'use myApp\n db.test.find()' myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 mongodb '' + local 'command=use myApp\n db.test.find()' + local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local driver=mongodb + local suffix=.svc.cluster.local + egrep -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp + /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/' ++ local LAST_OUT=/tmp/tmp.kQVD4QjiLG +++ mktemp ++ local LAST_ERR=/tmp/tmp.CqcEP5KzH8 ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.kQVD4QjiLG ++ cat /tmp/tmp.CqcEP5KzH8 ++ rm /tmp/tmp.kQVD4QjiLG /tmp/tmp.CqcEP5KzH8 ++ return 0 + local client_container=psmdb-client-5f578b7f94-kpq79 + local mongo_flag= + kubectl_bin exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' ++ mktemp + local LAST_OUT=/tmp/tmp.OCwTNr9zQz ++ mktemp + local LAST_ERR=/tmp/tmp.WzVmcOswMq + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.OCwTNr9zQz + cat /tmp/tmp.WzVmcOswMq + rm /tmp/tmp.OCwTNr9zQz /tmp/tmp.WzVmcOswMq + return 0 + diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/find-sharded.json /tmp/tmp.ziWkPNXLPI/find-sharded + echo + set -o xtrace + check_exported_mongos_service_endpoint 34.170.230.148 + local host=34.170.230.148 ++ kubectl_bin get psmdb some-name '-o=jsonpath={.status.host}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.QmBL90WtQi +++ mktemp ++ local LAST_ERR=/tmp/tmp.weZlqDntyf ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb some-name '-o=jsonpath={.status.host}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.QmBL90WtQi ++ cat /tmp/tmp.weZlqDntyf ++ rm /tmp/tmp.QmBL90WtQi /tmp/tmp.weZlqDntyf ++ return 0 + '[' 
34.170.230.148 '!=' 34.170.230.148 ']' + echo 'drop collection' drop collection + run_mongos 'use myApp\n db.test.drop()' myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local 'command=use myApp\n db.test.drop()' + local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 + local driver=mongodb + local suffix=.svc.cluster.local ++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.mbbcIPkJo2 +++ mktemp ++ local LAST_ERR=/tmp/tmp.j59p6TqXbk ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.mbbcIPkJo2 ++ cat /tmp/tmp.j59p6TqXbk ++ rm /tmp/tmp.mbbcIPkJo2 /tmp/tmp.j59p6TqXbk ++ return 0 + local client_container=psmdb-client-5f578b7f94-kpq79 + local mongo_flag= + kubectl_bin exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' ++ mktemp + local LAST_OUT=/tmp/tmp.FGvrOHVq3M ++ mktemp + local LAST_ERR=/tmp/tmp.CT0vrp4Yze + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.drop()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin ' + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.FGvrOHVq3M Percona Server for MongoDB shell version v4.4.29-28 connecting to: mongodb://some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local:27017/admin?compressors=disabled&gssapiServiceName=mongodb Implicit session: session { "id" : UUID("1810add2-2d4d-4fd7-8ef9-929a524a6f7e") } Percona Server for MongoDB server version: v7.0.8-5 WARNING: shell and server versions do not match switched to db myApp true bye + cat /tmp/tmp.CT0vrp4Yze + rm /tmp/tmp.FGvrOHVq3M /tmp/tmp.CT0vrp4Yze + return 0 + echo 'check backup and restore -- minio' check backup and restore -- minio ++ get_backup_dest backup-minio-sharded ++ local backup_name=backup-minio-sharded ++ kubectl_bin get psmdb-backup backup-minio-sharded -o 'jsonpath={.status.destination}' +++ mktemp ++ local LAST_OUT=/tmp/tmp.a1maYW6uy3 +++ mktemp ++ local LAST_ERR=/tmp/tmp.nrjyopjWwc ++ local exit_status=0 ++ local timeout=4 +++ seq 0 2 ++ for i in '$(seq 0 2)' ++ set +e ++ kubectl get psmdb-backup backup-minio-sharded -o 'jsonpath={.status.destination}' ++ sed -e 's/.json$//' ++ sed 's|s3://||' ++ sed 's|azure://||' ++ exit_status=0 ++ set -e ++ '[' 0 '!=' 0 -a -n 0 ']' ++ break ++ cat /tmp/tmp.a1maYW6uy3 ++ cat /tmp/tmp.nrjyopjWwc ++ rm /tmp/tmp.a1maYW6uy3 /tmp/tmp.nrjyopjWwc ++ return 0 + backup_dest_minio=operator-testing/2024-05-07T11:05:24Z + run_restore backup-minio-sharded _restore_sharded + local backup_name=backup-minio-sharded + cat /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/conf/restore.yml + /usr/bin/sed -e 's/name:/name: restore-backup-minio-sharded/' + kubectl_bin apply -f - + /usr/bin/sed -e 's/backupName:/backupName: backup-minio-sharded/' ++ mktemp + local LAST_OUT=/tmp/tmp.YXcFqmckZT ++ mktemp + local LAST_ERR=/tmp/tmp.FqZb1sc7gP + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl 
apply -f - + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.YXcFqmckZT perconaservermongodbrestore.psmdb.percona.com/restore-backup-minio-sharded created + cat /tmp/tmp.FqZb1sc7gP + rm /tmp/tmp.YXcFqmckZT /tmp/tmp.FqZb1sc7gP + return 0 + run_recovery_check backup-minio-sharded _restore_sharded + local backup_name=backup-minio-sharded + local compare_suffix=_restore_sharded + wait_restore backup-minio-sharded some-name requested 0 1200 + local backup_name=backup-minio-sharded + local cluster_name=some-name + local target_state=requested + local wait_cluster_consistency=0 + local wait_time=1200 + set +o xtrace waiting psmdb-restore/backup-minio-sharded to reach requested state.......................................................... + '[' 0 -eq 1 ']' + echo + compare_kubectl statefulset/some-name-rs0 _restore_sharded + local resource=statefulset/some-name-rs0 + local postfix=_restore_sharded + local expected_result=/mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml + local new_result=/tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + '[' -n '' -a -f /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded-oc.yml ']' + kubectl_bin get -o yaml statefulset/some-name-rs0 + yq eval ' del(.metadata.ownerReferences[].apiVersion) | del(.metadata.managedFields) | del(.. | select(has("creationTimestamp")).creationTimestamp) | del(.. | select(has("namespace")).namespace) | del(.. | select(has("uid")).uid) | del(.metadata.resourceVersion) | del(.spec.template.spec.containers[].env[] | select(.name == "NAMESPACE")) | del(.metadata.selfLink) | del(.metadata.annotations."cloud.google.com/neg") | del(.. | select(has("image")).image) | del(.. | select(has("clusterIP")).clusterIP) | del(.. | select(has("clusterIPs")).clusterIPs) | del(.. | select(has("dataSource")).dataSource) | del(.. | select(has("procMount")).procMount) | del(.. | select(has("storageClassName")).storageClassName) | del(.. | select(has("finalizers")).finalizers) | del(.. | select(has("kubernetes.io/pvc-protection"))."kubernetes.io/pvc-protection") | del(.. | select(has("volumeName")).volumeName) | del(.. | select(has("volume.beta.kubernetes.io/storage-provisioner"))."volume.beta.kubernetes.io/storage-provisioner") | del(.. | select(has("volume.kubernetes.io/storage-provisioner"))."volume.kubernetes.io/storage-provisioner") | del(.spec.volumeMode) | del(.. | select(has("volume.kubernetes.io/selected-node"))."volume.kubernetes.io/selected-node") | del(.. | select(has("percona.com/last-config-hash"))."percona.com/last-config-hash") | del(.. | select(has("percona.com/configuration-hash"))."percona.com/configuration-hash") | del(.. | select(has("percona.com/ssl-hash"))."percona.com/ssl-hash") | del(.. | select(has("percona.com/ssl-internal-hash"))."percona.com/ssl-internal-hash") | del(.spec.volumeClaimTemplates[].spec.volumeMode | select(. == "Filesystem")) | del(.. | select(has("healthCheckNodePort")).healthCheckNodePort) | del(.. | select(has("nodePort")).nodePort) | del(.status) | (.. | select(tag == "!!str")) |= sub("demand-backup-physical-sharded-23638", "NAME_SPACE") | del(.spec.volumeClaimTemplates[].apiVersion) | del(.spec.volumeClaimTemplates[].kind) | del(.spec.ipFamilies) | del(.spec.ipFamilyPolicy) | (.. | select(. == "extensions/v1beta1")) = "apps/v1" | (.. | select(. 
== "batch/v1beta1")) = "batch/v1" ' - ++ mktemp + local LAST_OUT=/tmp/tmp.lN0XfxwjcD ++ mktemp + local LAST_ERR=/tmp/tmp.uebmfhvGcz + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get -o yaml statefulset/some-name-rs0 + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.lN0XfxwjcD + cat /tmp/tmp.uebmfhvGcz + rm /tmp/tmp.lN0XfxwjcD /tmp/tmp.uebmfhvGcz + return 0 + yq -i eval 'del(.spec.persistentVolumeClaimRetentionPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + version_gt 1.22 ++ echo '1.26 >= 1.22' ++ bc -l + '[' 1 -eq 1 ']' + return 0 + yq -i eval 'del(.spec.internalTrafficPolicy)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + yq -i eval 'del(.spec.allocateLoadBalancerNodePorts)' /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + [[ /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml == */cronjob* ]] + diff -u /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/statefulset_some-name-rs0_restore_sharded.yml /tmp/tmp.ziWkPNXLPI/statefulset_some-name-rs0.yml + wait_restore backup-minio-sharded some-name ready 0 1800 + local backup_name=backup-minio-sharded + local cluster_name=some-name + local target_state=ready + local wait_cluster_consistency=0 + local wait_time=1800 + set +o xtrace waiting psmdb-restore/backup-minio-sharded to reach ready state.......................................................................... + '[' 0 -eq 1 ']' + kubectl_bin get psmdb some-name -o yaml ++ mktemp + local LAST_OUT=/tmp/tmp.rnFksC7CmN ++ mktemp + local LAST_ERR=/tmp/tmp.eAbx8IJaXs + local exit_status=0 + local timeout=4 ++ seq 0 2 + for i in '$(seq 0 2)' + set +e + kubectl get psmdb some-name -o yaml + exit_status=0 + set -e + '[' 0 '!=' 0 -a -n 0 ']' + break + cat /tmp/tmp.rnFksC7CmN apiVersion: psmdb.percona.com/v1 kind: PerconaServerMongoDB metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"psmdb.percona.com/v1","kind":"PerconaServerMongoDB","metadata":{"annotations":{},"name":"some-name","namespace":"demand-backup-physical-sharded-23638"},"spec":{"backup":{"enabled":true,"image":"perconalab/percona-server-mongodb-operator:main-backup","storages":{"aws-s3":{"s3":{"bucket":"operator-testing","credentialsSecret":"aws-s3-secret","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"azure-blob":{"azure":{"container":"operator-testing","credentialsSecret":"azure-secret","prefix":"psmdb-demand-backup-physical-sharded"},"type":"azure"},"gcp-cs":{"s3":{"bucket":"operator-testing","credentialsSecret":"gcp-cs-secret","endpointUrl":"https://storage.googleapis.com","insecureSkipTLSVerify":false,"prefix":"psmdb-demand-backup-physical-sharded","region":"us-east-1"},"type":"s3"},"minio":{"s3":{"bucket":"operator-testing","credentialsSecret":"minio-secret","endpointUrl":"http://minio-service:9000/","insecureSkipTLSVerify":false,"region":"us-east-1"},"type":"s3"}},"tasks":[{"compressionType":"gzip","enabled":true,"name":"weekly","schedule":"0 0 * * 0","storageName":"aws-s3"}]},"image":"perconalab/percona-server-mongodb-operator:main-mongod7.0","imagePullPolicy":"Always","replsets":[{"affinity":{"antiAffinityTopologyKey":"none"},"configuration":"operationProfiling:\n mode: slowOp\n slowOpThresholdMs: 100\nsecurity:\n enableEncryption: true\n redactClientLogData: false\nsetParameter:\n 
+ wait_restore backup-minio-sharded some-name ready 0 1800
+ local backup_name=backup-minio-sharded
+ local cluster_name=some-name
+ local target_state=ready
+ local wait_cluster_consistency=0
+ local wait_time=1800
+ set +o xtrace
waiting psmdb-restore/backup-minio-sharded to reach ready state..........................................................................
+ '[' 0 -eq 1 ']'
+ kubectl_bin get psmdb some-name -o yaml
++ mktemp
+ local LAST_OUT=/tmp/tmp.rnFksC7CmN
++ mktemp
+ local LAST_ERR=/tmp/tmp.eAbx8IJaXs
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl get psmdb some-name -o yaml
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.rnFksC7CmN
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      [last-applied-configuration JSON omitted; it duplicates .spec below]
    percona.com/resync-pbm: "true"
  creationTimestamp: "2024-05-07T11:01:06Z"
  generation: 2
  name: some-name
  namespace: demand-backup-physical-sharded-23638
  resourceVersion: "27277"
  uid: bb928bff-3e1d-44d8-b03a-cc91e5e421e9
spec:
  backup:
    enabled: true
    image: perconalab/percona-server-mongodb-operator:main-backup
    storages:
      aws-s3:
        s3: {bucket: operator-testing, credentialsSecret: aws-s3-secret, insecureSkipTLSVerify: false, prefix: psmdb-demand-backup-physical-sharded, region: us-east-1}
        type: s3
      azure-blob:
        azure: {container: operator-testing, credentialsSecret: azure-secret, prefix: psmdb-demand-backup-physical-sharded}
        type: azure
      gcp-cs:
        s3: {bucket: operator-testing, credentialsSecret: gcp-cs-secret, endpointUrl: https://storage.googleapis.com, insecureSkipTLSVerify: false, prefix: psmdb-demand-backup-physical-sharded, region: us-east-1}
        type: s3
      minio:
        s3: {bucket: operator-testing, credentialsSecret: minio-secret, endpointUrl: http://minio-service:9000/, insecureSkipTLSVerify: false, region: us-east-1}
        type: s3
    tasks:
    - compressionType: gzip
      enabled: true
      name: weekly
      schedule: 0 0 * * 0
      storageName: aws-s3
  crVersion: 1.16.0
  image: perconalab/percona-server-mongodb-operator:main-mongod7.0
  imagePullPolicy: Always
  replsets:
  - affinity: {antiAffinityTopologyKey: none}
    configuration: |
      operationProfiling:
        mode: slowOp
        slowOpThresholdMs: 100
      security:
        enableEncryption: true
        redactClientLogData: false
      setParameter:
        ttlMonitorSleepSecs: 60
        wiredTigerConcurrentReadTransactions: 128
        wiredTigerConcurrentWriteTransactions: 128
      storage:
        engine: wiredTiger
        wiredTiger:
          collectionConfig:
            blockCompressor: snappy
          engineConfig:
            directoryForIndexes: false
            journalCompressor: snappy
          indexConfig:
            prefixCompression: true
    expose: {enabled: false, exposeType: ClusterIP}
    name: rs0
    resources:
      limits: {cpu: 500m, memory: 1G}
      requests: {cpu: 100m, memory: 0.1G}
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 3Gi
  secrets:
    users: some-users
  sharding:
    configsvrReplSet:
      affinity: {antiAffinityTopologyKey: none}
      configuration: |
        operationProfiling:
          mode: slowOp
          slowOpThresholdMs: 100
        security:
          enableEncryption: true
          redactClientLogData: false
        setParameter:
          ttlMonitorSleepSecs: 60
          wiredTigerConcurrentReadTransactions: 128
          wiredTigerConcurrentWriteTransactions: 128
        storage:
          engine: wiredTiger
          wiredTiger:
            collectionConfig:
              blockCompressor: snappy
            engineConfig:
              directoryForIndexes: false
              journalCompressor: snappy
            indexConfig:
              prefixCompression: true
      expose: {enabled: false, exposeType: ClusterIP}
      resources:
        limits: {cpu: 500m, memory: 1G}
        requests: {cpu: 100m, memory: 0.1G}
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
    enabled: true
    mongos:
      affinity: {antiAffinityTopologyKey: none}
      expose: {exposeType: LoadBalancer}
      resources:
        limits: {cpu: 500m, memory: 1G}
        requests: {cpu: 100m, memory: 0.1G}
      size: 3
  upgradeOptions:
    apply: Never
status:
  conditions:
  - {lastTransitionTime: "2024-05-07T11:33:44Z", message: 'cfg: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:33:44Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:34:11Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:34:11Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:34:18Z", message: 'cfg: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:34:18Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:40:46Z", message: 'cfg: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:40:46Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:41:08Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:41:08Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:41:17Z", reason: MongosReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:41:37Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:42:14Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:42:14Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:42:16Z", message: 'cfg: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:42:16Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:42:48Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:42:48Z", status: "True", type: initializing}
  - {lastTransitionTime: "2024-05-07T11:43:22Z", message: 'rs0: ready', reason: RSReady, status: "True", type: ready}
  - {lastTransitionTime: "2024-05-07T11:43:22Z", status: "True", type: initializing}
  host: 34.170.230.148
  mongoImage: perconalab/percona-server-mongodb-operator:main-mongod7.0
  mongoVersion: 7.0.8-5
  mongos: {ready: 0, size: 0, status: initializing}
  observedGeneration: 2
  ready: 0
  replsets:
    cfg: {initialized: true, ready: 0, size: 3, status: initializing}
    rs0: {added_as_shard: true, initialized: true, ready: 0, size: 3, status: initializing}
  size: 6
  state: initializing
+ cat /tmp/tmp.eAbx8IJaXs
+ rm /tmp/tmp.rnFksC7CmN /tmp/tmp.eAbx8IJaXs
+ return 0
++ yq '.metadata.annotations."percona.com/resync-pbm"'
++ kubectl_bin get psmdb some-name -o yaml
+++ mktemp
++ local LAST_OUT=/tmp/tmp.CuDVzzlGDJ
+++ mktemp
++ local LAST_ERR=/tmp/tmp.iXsrUmtsdf
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o yaml
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.CuDVzzlGDJ
++ cat /tmp/tmp.iXsrUmtsdf
++ rm /tmp/tmp.CuDVzzlGDJ /tmp/tmp.iXsrUmtsdf
++ return 0
+ '[' true == null ']'
+ echo
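The dotted "waiting psmdb-restore/... to reach ... state" progress lines earlier come from a wait_restore helper. A sketch of the polling it implies; the restore object name (restore-$backup_name) and the one-second poll interval are assumptions, not confirmed by the trace:

# Sketch of wait_restore as implied by the trace, not the literal helper.
wait_restore() {
    local backup_name=$1 cluster_name=$2 target_state=$3
    local wait_cluster_consistency=$4 wait_time=$5
    echo -n "waiting psmdb-restore/${backup_name} to reach ${target_state} state"
    local retry=0
    # Poll the PerconaServerMongoDBRestore object until it reports the
    # target state ("requested", then later "ready") or the budget runs out.
    until [[ $(kubectl get psmdb-restore "restore-${backup_name}" \
                -o 'jsonpath={.status.state}') == "$target_state" ]]; do
        echo -n .
        sleep 1
        retry=$((retry + 1))
        if [[ $retry -ge $wait_time ]]; then
            return 1    # per-call budget (1200s/1800s above) exhausted
        fi
    done
    echo
}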
+ wait_cluster_consistency some-name
+ local cluster_name=some-name
+ local wait_time=32
+ retry=0
+ sleep 7
+ echo -n 'waiting for cluster readyness'
waiting for cluster readyness
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.2YFmVzPzyo
+++ mktemp
++ local LAST_ERR=/tmp/tmp.qriLuJ2KS0
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.2YFmVzPzyo
++ cat /tmp/tmp.qriLuJ2KS0
++ rm /tmp/tmp.2YFmVzPzyo /tmp/tmp.qriLuJ2KS0
++ return 0
+ [[ initializing == \r\e\a\d\y ]]
+ let retry+=1
+ '[' 1 -ge 32 ']'
+ echo -n .
[... the poll above repeats every 10 seconds; retries 2-17 all return "initializing" in the same way ...]
.+ sleep 10
++ kubectl_bin get psmdb some-name -o 'jsonpath={.status.state}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.vWKTCvFexk
+++ mktemp
++ local LAST_ERR=/tmp/tmp.VfRlEeHxZx
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get psmdb some-name -o 'jsonpath={.status.state}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.vWKTCvFexk
++ cat /tmp/tmp.VfRlEeHxZx
++ rm /tmp/tmp.vWKTCvFexk /tmp/tmp.VfRlEeHxZx
++ return 0
+ [[ ready == \r\e\a\d\y ]]
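The readiness loop condensed above reduces to polling .status.state of the psmdb resource. A sketch matching the trace (initial 7s sleep, 10s interval, cap of 32 retries); the failure branch is an assumption, since this run never reaches it:

# Sketch of wait_cluster_consistency as seen in the trace.
wait_cluster_consistency() {
    local cluster_name=$1
    local wait_time=32 retry=0
    sleep 7                              # initial grace period, per the trace
    echo -n 'waiting for cluster readyness'
    until [[ $(kubectl get psmdb "$cluster_name" \
                -o 'jsonpath={.status.state}') == "ready" ]]; do
        let retry+=1
        if [ "$retry" -ge "$wait_time" ]; then
            return 1                     # assumed failure path
        fi
        echo -n .
        sleep 10
    done
    echo
}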
+ compare_mongos_cmd find myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 -sharded
+ local command=find
+ local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638
+ local postfix=-sharded
+ local suffix=
+ local database=myApp
+ local collection=test
+ run_mongos 'use myApp\n db.test.find()' myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638 mongodb ''
+ local 'command=use myApp\n db.test.find()'
+ local uri=myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638
+ egrep -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:'
+ local driver=mongodb
+ local suffix=.svc.cluster.local
+ /usr/bin/sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
++ kubectl_bin get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
+++ mktemp
++ local LAST_OUT=/tmp/tmp.luCxlKYj1y
+++ mktemp
++ local LAST_ERR=/tmp/tmp.Yt5DeQ7rRl
++ local exit_status=0
++ local timeout=4
+++ seq 0 2
++ for i in '$(seq 0 2)'
++ set +e
++ kubectl get pods --selector=name=psmdb-client -o 'jsonpath={.items[].metadata.name}'
++ exit_status=0
++ set -e
++ '[' 0 '!=' 0 -a -n 0 ']'
++ break
++ cat /tmp/tmp.luCxlKYj1y
++ cat /tmp/tmp.Yt5DeQ7rRl
++ rm /tmp/tmp.luCxlKYj1y /tmp/tmp.Yt5DeQ7rRl
++ return 0
+ local client_container=psmdb-client-5f578b7f94-kpq79
+ local mongo_flag=
+ kubectl_bin exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin '
++ mktemp
+ local LAST_OUT=/tmp/tmp.P2pCWHfUsp
++ mktemp
+ local LAST_ERR=/tmp/tmp.3LrEsRcbgN
+ local exit_status=0
+ local timeout=4
++ seq 0 2
+ for i in '$(seq 0 2)'
+ set +e
+ kubectl exec psmdb-client-5f578b7f94-kpq79 -- bash -c 'printf '\''use myApp\n db.test.find()\n'\'' | mongo mongodb://myApp:myPass@some-name-mongos.demand-backup-physical-sharded-23638.svc.cluster.local/admin '
+ exit_status=0
+ set -e
+ '[' 0 '!=' 0 -a -n 0 ']'
+ break
+ cat /tmp/tmp.P2pCWHfUsp
+ cat /tmp/tmp.3LrEsRcbgN
+ rm /tmp/tmp.P2pCWHfUsp /tmp/tmp.3LrEsRcbgN
+ return 0
+ diff /mnt/jenkins/workspace/cloud-psmdb-operator_PR-1543/e2e-tests/demand-backup-physical-sharded/compare/find-sharded.json /tmp/tmp.ziWkPNXLPI/find-sharded
+ echo
+ set -o xtrace
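The data check above runs the query through the psmdb-client pod against the mongos service and scrubs volatile output (ObjectIds, per-run namespace suffixes) before diffing against the expected result. A simplified sketch; the egrep/sed noise filters are shortened from the full lists in the trace:

# Simplified sketch of the run_mongos data check.
run_mongos() {
    local command=$1 uri=$2
    local client
    # Find the helper pod that carries the mongo shell.
    client=$(kubectl get pods --selector=name=psmdb-client \
        -o 'jsonpath={.items[].metadata.name}')
    # Pipe the shell commands through the pod, then strip output that
    # changes from run to run so the diff stays stable.
    kubectl exec "$client" -- bash -c \
        "printf '${command}\n' | mongo mongodb://${uri}.svc.cluster.local/admin" |
        egrep -v 'I NETWORK|W NETWORK|Implicit session:|versions do not match' |
        sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/'
}

The empty diff against compare/find-sharded.json is the actual pass criterion: the documents written before the backup are all present after the physical restore.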
+ destroy demand-backup-physical-sharded-23638
+ local namespace=demand-backup-physical-sharded-23638
+ local ignore_logs=true
+ desc 'destroy cluster/operator and all other resources'
+ set +o xtrace
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbbackups.psmdb.percona.com" deleted
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbrestores.psmdb.percona.com" deleted
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbs.psmdb.percona.com" deleted
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n demand-backup-physical-sharded-23638 backup-minio-sharded --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaservermongodbbackup.psmdb.percona.com/backup-minio-sharded patched
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com condition met
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbs"
clusterrole.rbac.authorization.k8s.io "percona-server-mongodb-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "service-account-percona-server-mongodb-operator" deleted
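The patch commands above exist to clear finalizers so that deleting the CRDs cannot hang on stuck custom resources; the "server doesn't have a resource type" errors are expected once a CRD is already gone. A sketch of the cleanup pattern, with the per-name loop assumed rather than taken verbatim from the trace:

# Cleanup pattern: strip finalizers from any leftover custom resources,
# then remove the CRD itself. Missing-type errors are tolerated because
# the CRD may already have been deleted.
for crd in perconaservermongodbbackups perconaservermongodbrestores perconaservermongodbs; do
    for name in $(kubectl get "${crd}.psmdb.percona.com" -n "$namespace" \
            -o 'jsonpath={.items[*].metadata.name}' 2>/dev/null); do
        kubectl patch "${crd}.psmdb.percona.com" -n "$namespace" "$name" \
            --type=merge -p '{"metadata":{"finalizers":[]}}' || true
    done
    kubectl delete crd "${crd}.psmdb.percona.com" --ignore-not-found
done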