Log: /mnt/jenkins/workspace/cloud-pxc-operator_PR-2227/e2e-tests/logs/upgrade-haproxy-5-7.log
Warning: version difference between client (1.34) and server (1.31) exceeds the supported minor version skew of +/-1
Warning: version difference between client (1.34) and server (1.31) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
deploy cert manager
-----------------------------------------------------------------------------------
namespace/cert-manager created
namespace/cert-manager labeled
namespace/cert-manager configured
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-cluster-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-tokenrequest created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-cert-manager-tokenrequest created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager-cainjector created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Warning: resource namespaces/cert-manager is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces pxc-operator
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "pxc-operator" not found
waiting for namespace/pxc-operator to be deleted
namespace "cert-manager" deleted
Error from server (NotFound): namespaces "pxc-operator" not found
-----------------------------------------------------------------------------------
create namespace pxc-operator
-----------------------------------------------------------------------------------
namespace/pxc-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2227-2da0e2db-1-cluster6" modified.
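Note on the repeated "error: resource(s) were provided, but no name was specified" lines above: kubectl prints this when a command receives a resource type but neither a resource name, a label selector, nor --all. A minimal, hypothetical reproduction (not the exact cleanup helper used by this test suite, and the label below is made up for illustration):

    # Fails: a bare resource type is ambiguous for delete
    kubectl delete namespace
    # error: resource(s) were provided, but no name was specified

    # Works: name the resource explicitly, or select by label
    kubectl delete namespace cert-manager
    kubectl delete namespace -l purpose=e2e-test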
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
pod/percona-xtradb-cluster-operator-85f65db574-52zh8 condition met
waiting for pod/percona-xtradb-cluster-operator-85f65db574-52zh8 to become Ready.Ok
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces upgrade-haproxy-12862
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "upgrade-haproxy-12862" not found
waiting for namespace/upgrade-haproxy-12862 to be deleted
error: resource(s) were provided, but no name was specified
Error from server (NotFound): namespaces "upgrade-haproxy-12862" not found
-----------------------------------------------------------------------------------
create namespace upgrade-haproxy-12862
-----------------------------------------------------------------------------------
namespace/upgrade-haproxy-12862 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2227-2da0e2db-1-cluster6" modified.
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
"hashicorp" already exists with the same configuration, skipping
"minio" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "chaos-mesh" chart repository
...Successfully got an update from the "percona" chart repository
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
NAME: minio-service
LAST DEPLOYED: Fri Oct 31 17:34:26 2025
NAMESPACE: upgrade-haproxy-12862
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.upgrade-haproxy-12862.cluster.local
To access MinIO from localhost, run the below commands:
1. export POD_NAME=$(kubectl get pods --namespace upgrade-haproxy-12862 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace upgrade-haproxy-12862
Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/
You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace upgrade-haproxy-12862 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace upgrade-haproxy-12862 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local
pod/minio-service-55fcc5d75f-q9xs8 condition met
waiting for pod/minio-service-55fcc5d75f-q9xs8 to become Ready.Ok
make_bucket: operator-testing
pod "aws-cli" deleted from upgrade-haproxy-12862 namespace
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt. If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: Internal error occurred: Internal error occurred: error attaching to container: container is in CONTAINER_EXITED state
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/upgrade-haproxy created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "upgrade-haproxy-haproxy-0" not found
waiting for pod/upgrade-haproxy-haproxy-0 to become Ready......................Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/upgrade-haproxy-pxc-0 condition met
waiting for pod/upgrade-haproxy-pxc-0 to become Ready.Ok
pod/upgrade-haproxy-pxc-1 condition met
waiting for pod/upgrade-haproxy-pxc-1 to become Ready.Ok
pod/upgrade-haproxy-pxc-2 condition met
waiting for pod/upgrade-haproxy-pxc-2 to become Ready.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
Unable to use a TTY - input is not a terminal or the right kind of file
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
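Note on the "write data" step above: the value 100500 expected by the select-1.sql comparison at the end of this log is presumably written here through the pxc-client deployment. A hedged sketch of that write/verify pattern; the schema (myApp.myApp), credentials, and the haproxy service name are assumptions, while the value 100500 and the host upgrade-haproxy-pxc-1.upgrade-haproxy-pxc appear elsewhere in this log:

    # Hypothetical sketch, not the suite's exact helper
    kubectl -n upgrade-haproxy-12862 exec deploy/pxc-client -- \
      mysql -h upgrade-haproxy-haproxy -uroot -p"$ROOT_PASSWORD" \
      -e "CREATE DATABASE IF NOT EXISTS myApp; CREATE TABLE IF NOT EXISTS myApp.myApp (id INT PRIMARY KEY); INSERT INTO myApp.myApp VALUES (100500);"
    kubectl -n upgrade-haproxy-12862 exec deploy/pxc-client -- \
      mysql -h upgrade-haproxy-pxc-1.upgrade-haproxy-pxc -uroot -p"$ROOT_PASSWORD" -N \
      -e "SELECT * FROM myApp.myApp;"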
Unable to use a TTY - input is not a terminal or the right kind of file
[2025-10-31T17:40:50+0000] run pxc-backup/on-demand-backup-minio
perconaxtradbclusterbackup.pxc.percona.com/on-demand-backup-minio created
waiting for pxc-backup/on-demand-backup-minio to reach Succeeded state.........................Succeeded
-----------------------------------------------------------------------------------
upgrade operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator patched
deployment "percona-xtradb-cluster-operator" successfully rolled out
-----------------------------------------------------------------------------------
wait for operator upgrade
-----------------------------------------------------------------------------------
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2227-2da0e2db-1-cluster6" modified.
pod/percona-xtradb-cluster-operator-58654ff4d9-qd4vm condition met
waiting for pod/percona-xtradb-cluster-operator-58654ff4d9-qd4vm to become Ready.Ok
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2227-2da0e2db-1-cluster6" modified.
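Note on the "upgrade operator" step above: the "deployment.apps/percona-xtradb-cluster-operator patched" and "successfully rolled out" lines correspond to bumping the operator image and waiting for the rollout. A hedged equivalent with plain kubectl; the container name and <new-tag> are placeholders, not values taken from this log:

    # Hypothetical sketch of the operator image bump
    kubectl -n pxc-operator patch deployment percona-xtradb-cluster-operator \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"percona-xtradb-cluster-operator","image":"percona/percona-xtradb-cluster-operator:<new-tag>"}]}}}}'
    kubectl -n pxc-operator rollout status deployment/percona-xtradb-cluster-operator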
-----------------------------------------------------------------------------------
check images and generation after operator upgrade
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for pxc/upgrade-haproxy to be ready
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/upgrade-haproxy-pxc-0 condition met
waiting for pod/upgrade-haproxy-pxc-0 to become Ready.Ok
pod/upgrade-haproxy-pxc-1 condition met
waiting for pod/upgrade-haproxy-pxc-1 to become Ready.Ok
pod/upgrade-haproxy-pxc-2 condition met
waiting for pod/upgrade-haproxy-pxc-2 to become Ready.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
-----------------------------------------------------------------------------------
patch pxc images and upgrade
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/upgrade-haproxy patched
-----------------------------------------------------------------------------------
check images and generation after full upgrade
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for pxc/upgrade-haproxy to be ready........................
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/upgrade-haproxy-pxc-0 condition met
waiting for pod/upgrade-haproxy-pxc-0 to become Ready.Ok
pod/upgrade-haproxy-pxc-1 condition met
waiting for pod/upgrade-haproxy-pxc-1 to become Ready.Ok
pod/upgrade-haproxy-pxc-2 condition met
waiting for pod/upgrade-haproxy-pxc-2 to become Ready.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-65c4d67b5b-ht7sx condition met
waiting for pod/pxc-client-65c4d67b5b-ht7sx to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
--- /mnt/jenkins/workspace/cloud-pxc-operator_PR-2227/e2e-tests/upgrade-haproxy/compare/select-1.sql 2025-10-31 15:50:08.288697309 +0000
+++ /tmp/tmp.1khrlbCBbS/select-1.sql 2025-10-31 17:46:33.201587284 +0000
@@ -1 +1,2 @@
-100500
+ERROR 2003 (HY000): Can't connect to MySQL server on 'upgrade-haproxy-pxc-1.upgrade-haproxy-pxc' (111)
+command terminated with exit code 1
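Note on the trailing diff: this is the actual test failure. The select-1.sql comparison expected the single row 100500, but the query against upgrade-haproxy-pxc-1.upgrade-haproxy-pxc returned ERROR 2003 with errno 111 (connection refused), so the data check after the PXC image patch did not pass. A few hypothetical triage commands; the namespace and pod names are taken from this log, while the pxc container name and the credentials variable are assumptions:

    # Inspect pod state and the log of the member that refused connections
    kubectl -n upgrade-haproxy-12862 get pods -o wide
    kubectl -n upgrade-haproxy-12862 logs upgrade-haproxy-pxc-1 -c pxc --tail=100
    # Re-run a trivial query against the same host through the client deployment
    kubectl -n upgrade-haproxy-12862 exec deploy/pxc-client -- \
      mysql -h upgrade-haproxy-pxc-1.upgrade-haproxy-pxc -uroot -p"$ROOT_PASSWORD" -e "SELECT 1;"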