Log: /mnt/jenkins/workspace/cloud-psmdb-operator_PR-2125/e2e-tests/logs/service-per-pod.log
grep: warning: stray \ before -
Warning: version difference between client (1.34) and server (1.31) exceeds the supported minor version skew of +/-1
Warning: version difference between client (1.34) and server (1.31) exceeds the supported minor version skew of +/-1
Warning: version difference between client (1.34) and server (1.31) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
grep: warning: stray \ before -
grep: warning: stray \ before -
error: the server doesn't have a resource type "perconaservermongodbbackups"
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbbackups"
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
error: the server doesn't have a resource type "perconaservermongodbs"
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbs"
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "null" not found
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces psmdb-operator
-----------------------------------------------------------------------------------
egrep: warning: egrep is obsolescent; using grep -E
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace psmdb-operator
-----------------------------------------------------------------------------------
namespace/psmdb-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-2125-8ebcb80f-4-cluster3" modified.
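For reference, the finalizer patches above are the usual way to keep leftover custom resources from blocking CRD deletion. A minimal sketch of that pattern, looping over the three kinds seen in the log (the loop structure and the jsonpath listing are assumptions, not the repo's actual cleanup helper):

# Sketch only: strip finalizers from any leftover PSMDB custom resources so that
# deleting the CRDs cannot hang on a finalizer the (now absent) operator would normally handle.
for kind in perconaservermongodbbackups perconaservermongodbrestores perconaservermongodbs; do
    crd="${kind}.psmdb.percona.com"
    # Emit "<namespace> <name>" for every object of this kind (empty output if the CRD is already gone).
    kubectl get "$crd" --all-namespaces \
        -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' 2>/dev/null |
    while read -r ns name; do
        kubectl patch "$crd" "$name" -n "$ns" --type=merge -p '{"metadata":{"finalizers":[]}}' || true
    done
    # Remove the CRD itself; --ignore-not-found keeps re-runs clean.
    kubectl delete crd "$crd" --ignore-not-found
done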
-----------------------------------------------------------------------------------
start PSMDB operator: perconalab/percona-server-mongodb-operator:PR-2125-8ebcb80f
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
waiting for pod/percona-server-mongodb-operator-84495cbcf7-bcqkc to be ready.OK
Print operator info from log
2025-12-11T17:39:14.718Z INFO setup Manager starting up {"gitCommit": "8ebcb80f1012f36e90d7464b396a5e33442e27be", "gitBranch": "PR-2125-8ebcb80f", "buildTime": "", "goVersion": "go1.25.5", "os": "linux", "arch": "amd64"}
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces service-per-pod-28260
-----------------------------------------------------------------------------------
egrep: warning: egrep is obsolescent; using grep -E
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
create namespace service-per-pod-28260
-----------------------------------------------------------------------------------
namespace/service-per-pod-28260 created
Context "gke_cloud-dev-112233_us-central1-a_jen-psmdb-2125-8ebcb80f-4-cluster3" modified.
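The operator start above (server-side applied CRDs, then RBAC, Deployment and a readiness wait) corresponds to a sequence like the one below. The deploy/ file names and the pod label selector are assumptions about the repository layout; the image tag and namespace are taken from the log:

# Sketch of the deployment sequence, not the e2e helper itself.
kubectl apply --server-side --force-conflicts -f deploy/crd.yaml      # the three serverside-applied CRDs
kubectl apply -n psmdb-operator -f deploy/rbac.yaml                   # ClusterRole, ServiceAccount, binding
# Pin the operator image to the PR build before creating the Deployment.
sed -e 's|image: .*|image: perconalab/percona-server-mongodb-operator:PR-2125-8ebcb80f|' deploy/operator.yaml \
    | kubectl apply -n psmdb-operator -f -
# Wait until the operator pod reports Ready (label selector is an assumption).
kubectl wait --for=condition=Ready pod -n psmdb-operator -l name=percona-server-mongodb-operator --timeout=300s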
-----------------------------------------------------------------------------------
deploy cert manager
-----------------------------------------------------------------------------------
namespace/cert-manager created
namespace/cert-manager labeled
namespace/cert-manager configured
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-tokenrequest created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-tokenrequest created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager-cainjector created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Warning: resource namespaces/cert-manager is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/cert-manager-cainjector-5dc9c8b4f7-b8chr condition met
pod/cert-manager-df4b69479-cwktf condition met
pod/cert-manager-webhook-769bbb594d-z5twh condition met
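The cert-manager rollout above amounts to applying the upstream release manifest and waiting for the cainjector, controller and webhook pods (the three "condition met" lines). The release URL is the same one that appears later in this log's teardown step; the wait selector is an assumption:

# Sketch: apply the upstream release manifest and wait for the three cert-manager pods.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml
kubectl wait --for=condition=Ready pod -n cert-manager \
    -l app.kubernetes.io/instance=cert-manager --timeout=300s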
-----------------------------------------------------------------------------------
create secrets and start client
-----------------------------------------------------------------------------------
deployment.apps/psmdb-client created
secret/some-users created
-----------------------------------------------------------------------------------
check ClusterIP
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
create PSMDB cluster cluster-ip-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/cluster-ip created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
waiting for pod/cluster-ip-rs0-0 to be ready.................OK
waiting for pod/cluster-ip-rs0-1 to be ready................OK
waiting for pod/cluster-ip-rs0-2 to be ready..............OK
-----------------------------------------------------------------------------------
check if service and statefulset created with expected config
-----------------------------------------------------------------------------------
[2025-12-11T17:44:31+0000] compare_kubectl: statefulset/cluster-ip-rs0 OK
[2025-12-11T17:44:32+0000] compare_kubectl: service/cluster-ip-rs0-0 OK
-----------------------------------------------------------------------------------
create user myApp
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://34.118.232.143:27017,34.118.232.18:27017,34.118.228.248:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("92800590-4ccd-45f6-b5d9-8a51a9155c86") }
Percona Server for MongoDB server version: v8.0.16-5
WARNING: shell and server versions do not match
Successfully added user: { "user" : "myApp", "roles" : [ { "db" : "myApp", "role" : "readWrite" } ] }
bye
-----------------------------------------------------------------------------------
write data, read from all
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://34.118.232.143:27017,34.118.232.18:27017,34.118.228.248:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("4df50cfa-56f3-4d6b-aeab-4aa1b4fdaefa") }
Percona Server for MongoDB server version: v8.0.16-5
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
[2025-12-11T17:46:00+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
[2025-12-11T17:46:05+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
[2025-12-11T17:46:09+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
-----------------------------------------------------------------------------------
delete PSMDB cluster cluster-ip-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com "cluster-ip" deleted from service-per-pod-28260 namespace
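The cluster-ip case exercised above checks that, with per-pod exposure enabled, the operator creates one ClusterIP Service per replica (service/cluster-ip-rs0-0 and its siblings) instead of a single replica-set Service. The cr.yaml the test applies is not shown in the log; the excerpt below is an assumed shape, and the field names (expose.enabled, expose.type versus the older exposeType) should be verified against the CRD applied earlier:

# Assumed excerpt of the cluster-ip cr.yaml; field names are an assumption.
cat <<'EOF'
spec:
  replsets:
    - name: rs0
      size: 3
      expose:
        enabled: true      # expose each replica set member with its own Service
        type: ClusterIP    # yields cluster-ip-rs0-0, cluster-ip-rs0-1, cluster-ip-rs0-2
EOF
# The compare step above effectively asserts that these per-pod Services exist:
kubectl -n service-per-pod-28260 get svc cluster-ip-rs0-0 -o jsonpath='{.spec.type}{"\n"}'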
-----------------------------------------------------------------------------------
check LoadBalancer
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
create PSMDB cluster local-balancer-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/local-balancer created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
waiting for pod/local-balancer-rs0-0 to be ready.........OK
waiting for pod/local-balancer-rs0-1 to be ready.......OK
waiting for pod/local-balancer-rs0-2 to be ready.......OK
-----------------------------------------------------------------------------------
check if service and statefulset created with expected config
-----------------------------------------------------------------------------------
[2025-12-11T17:48:02+0000] compare_kubectl: statefulset/local-balancer-rs0 OK
[2025-12-11T17:48:03+0000] compare_kubectl: service/local-balancer-rs0-0 OK
egrep: warning: egrep is obsolescent; using grep -E
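For the local-balancer case the per-pod Services are of type LoadBalancer, so after the compare step the test typically has to wait until the cloud provider assigns external addresses before it can connect. A sketch of such a wait, assuming a simple poll on the Service status (loop shape and timing are assumptions, not the repo's helper):

# Poll until each per-pod LoadBalancer Service has an external address assigned.
for i in 0 1 2; do
    svc="local-balancer-rs0-$i"
    until kubectl -n service-per-pod-28260 get svc "$svc" \
            -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null | grep -Eq '[0-9]'; do
        sleep 5    # keep polling until GKE provisions the load balancer
    done
done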
-----------------------------------------------------------------------------------
create user myApp
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://136.114.115.220:27017,34.118.203.88:27017,34.134.59.197:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("24f8ec0c-f7ed-4ac6-aa9a-57fa16aa990e") }
Percona Server for MongoDB server version: v8.0.16-5
WARNING: shell and server versions do not match
Successfully added user: { "user" : "myApp", "roles" : [ { "db" : "myApp", "role" : "readWrite" } ] }
bye
-----------------------------------------------------------------------------------
write data, read from all
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://136.114.115.220:27017,34.118.203.88:27017,34.134.59.197:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("514008a7-81a6-4b6d-8c7f-42f1297eae58") }
Percona Server for MongoDB server version: v8.0.16-5
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
egrep: warning: egrep is obsolescent; using grep -E
[2025-12-11T17:50:28+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
egrep: warning: egrep is obsolescent; using grep -E
[2025-12-11T17:50:36+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
egrep: warning: egrep is obsolescent; using grep -E
[2025-12-11T17:50:45+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
-----------------------------------------------------------------------------------
delete PSMDB cluster local-balancer-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com "local-balancer" deleted from service-per-pod-28260 namespace
-----------------------------------------------------------------------------------
check NodePort
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
create PSMDB cluster node-port-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/node-port created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
waiting for pod/node-port-rs0-0 to be ready........OK
waiting for pod/node-port-rs0-1 to be ready....OK
waiting for pod/node-port-rs0-2 to be ready.......OK
-----------------------------------------------------------------------------------
check if service and statefulset created with expected config
-----------------------------------------------------------------------------------
[2025-12-11T17:52:17+0000] compare_kubectl: statefulset/node-port-rs0 OK
[2025-12-11T17:52:18+0000] compare_kubectl: service/node-port-rs0-0 OK
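After the node-port compare step above, a quick manual check (not the repo's compare_kubectl helper) is to list the generated per-pod Services and confirm their type and allocated node port:

# List the per-pod Services created for the node-port replica set.
kubectl -n service-per-pod-28260 get svc -o name | grep -E 'node-port-rs0-[0-9]+$'
# Each of them should be of type NodePort with a port allocated on every node.
kubectl -n service-per-pod-28260 get svc node-port-rs0-0 \
    -o jsonpath='{.spec.type}{" "}{.spec.ports[0].nodePort}{"\n"}'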
-----------------------------------------------------------------------------------
create user myApp
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://34.118.239.57:27017,34.118.236.162:27017,34.118.230.161:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("96c81358-2386-43a1-8b4c-0513191937f0") }
Percona Server for MongoDB server version: v8.0.16-5
WARNING: shell and server versions do not match
Successfully added user: { "user" : "myApp", "roles" : [ { "db" : "myApp", "role" : "readWrite" } ] }
bye
-----------------------------------------------------------------------------------
write data, read from all
-----------------------------------------------------------------------------------
Percona Server for MongoDB shell version v4.4.29-28
connecting to: mongodb://34.118.239.57:27017,34.118.236.162:27017,34.118.230.161:27017/admin?compressors=disabled&gssapiServiceName=mongodb&replicaSet=rs0&ssl=false
Implicit session: session { "id" : UUID("02a11217-322a-46fa-838a-eba217dab557") }
Percona Server for MongoDB server version: v8.0.16-5
WARNING: shell and server versions do not match
switched to db myApp
WriteResult({ "nInserted" : 1 })
bye
[2025-12-11T17:53:45+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
[2025-12-11T17:53:51+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
[2025-12-11T17:53:57+0000] running db.test.find() in myApp
egrep: warning: egrep is obsolescent; using grep -E
-----------------------------------------------------------------------------------
add service-per-pod label and annotation
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/node-port patched
-----------------------------------------------------------------------------------
check if service created with expected config
-----------------------------------------------------------------------------------
[2025-12-11T17:54:07+0000] compare_kubectl: service/node-port-rs0-0 OK
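The "add service-per-pod label and annotation" step above patches the node-port CR so that the generated per-pod Services carry extra metadata, and the follow-up compare re-checks service/node-port-rs0-0. The actual patch is not in the log; the sketch below uses assumed field names (serviceLabels / serviceAnnotations under expose) and placeholder keys, so treat all of them as hypothetical:

# Hypothetical patch adding a label and an annotation to the generated per-pod Services.
# Field names and keys are assumptions; check the perconaservermongodbs CRD schema.
kubectl -n service-per-pod-28260 patch perconaservermongodbs.psmdb.percona.com node-port --type=merge -p '{
  "spec": {
    "replsets": [
      {
        "name": "rs0",
        "size": 3,
        "expose": {
          "enabled": true,
          "type": "NodePort",
          "serviceLabels": { "test-label": "service-per-pod" },
          "serviceAnnotations": { "test-annotation": "service-per-pod" }
        }
      }
    ]
  }
}'
# Note: a merge patch replaces the whole replsets list, so a real patch has to carry the
# complete replica set entry (or use a JSON patch targeted at the expose section instead).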
-----------------------------------------------------------------------------------
delete PSMDB cluster node-port-rs0
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com "node-port" deleted from service-per-pod-28260 namespace
-----------------------------------------------------------------------------------
check Mongos in sharded cluster
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name created
waiting for pod/some-name-rs0-0 to be ready..........OK
waiting for pod/some-name-rs0-1 to be ready...........OK
waiting for pod/some-name-rs0-2 to be ready...........OK
Waiting for cluster readyness......................
waiting for pod/some-name-cfg-0 to be ready.OK
waiting for pod/some-name-cfg-1 to be ready.OK
waiting for pod/some-name-cfg-2 to be ready.OK
waiting for pod/some-name-mongos-0 to be ready.OK
waiting for pod/some-name-mongos-1 to be ready.OK
waiting for pod/some-name-mongos-2 to be ready.OK
Waiting for cluster readyness
-----------------------------------------------------------------------------------
enabling servicePerPod for mongos
-----------------------------------------------------------------------------------
perconaservermongodb.psmdb.percona.com/some-name patched
waiting for pod/some-name-mongos-0 to be ready.OK
waiting for pod/some-name-mongos-1 to be ready.OK
waiting for pod/some-name-mongos-2 to be ready.OK
Waiting for cluster readyness
check that some-name-mongos-0 was created.OK
check that some-name-mongos-1 was created.OK
check that some-name-mongos-2 was created.OK
check that some-name-mongos was removed.OK
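The "enabling servicePerPod for mongos" step above switches the sharded cluster from a single shared mongos Service to one Service per mongos pod, which is exactly what the four checks that follow assert (some-name-mongos-0/1/2 created, some-name-mongos removed). The patch itself is not shown in the log; here is a sketch under the assumption that the CR exposes this as a boolean on the mongos expose block (the servicePerPod field name is an assumption to verify against the CRD):

# Hypothetical patch enabling per-pod Services for mongos.
kubectl -n service-per-pod-28260 patch perconaservermongodbs.psmdb.percona.com some-name --type=merge -p '{
  "spec": {
    "sharding": {
      "mongos": {
        "expose": {
          "servicePerPod": true
        }
      }
    }
  }
}'
# Once the operator reconciles, only the per-pod mongos Services should be listed:
kubectl -n service-per-pod-28260 get svc | grep -E 'some-name-mongos(-[0-9]+)?'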
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
Delete psmdb-backup
-----------------------------------------------------------------------------------
No resources found in service-per-pod-28260 namespace.
-----------------------------------------------------------------------------------
get and delete old CRDs and RBAC
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbbackups.psmdb.percona.com" deleted
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbrestores.psmdb.percona.com" deleted
customresourcedefinition.apiextensions.k8s.io "perconaservermongodbs.psmdb.percona.com" deleted
grep: warning: stray \ before -
grep: warning: stray \ before -
error: the server doesn't have a resource type "perconaservermongodbbackups"
+ kubectl patch perconaservermongodbbackups.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbbackups"
error: the server doesn't have a resource type "perconaservermongodbrestores"
+ kubectl patch perconaservermongodbrestores.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: the server doesn't have a resource type "perconaservermongodbrestores"
No resources found
+ kubectl patch perconaservermongodbs.psmdb.percona.com -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: resource(s) were provided, but no name was specified
clusterrole.rbac.authorization.k8s.io "percona-server-mongodb-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "service-account-percona-server-mongodb-operator" deleted
error: unable to read URL "https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml", server reported 504 Gateway Timeout, status code=504
error: unable to read URL "https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml", server reported 504 Gateway Timeout, status code=504
error: unable to read URL "https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml", server reported 504 Gateway Timeout, status code=504
error: unable to read URL "https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml", server reported 504 Gateway Timeout, status code=504
-----------------------------------------------------------------------------------
test passed
-----------------------------------------------------------------------------------