Log: /mnt/jenkins/workspace/cloud-pxc-operator_PR-2234/e2e-tests/logs/monitoring-pmm3-8-0.log
Warning: version difference between client (1.34) and server (1.31) exceeds the supported minor version skew of +/-1
Warning: version difference between client (1.34) and server (1.31) exceeds the supported minor version skew of +/-1
+ kubectl patch pxc -n cross-site-16423 cross-site-source --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
+ kubectl patch pxc -n cross-site-replica-26426 cross-site-replica --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
perconaxtradbcluster.pxc.percona.com "cross-site-source" deleted from cross-site-16423 namespace
perconaxtradbcluster.pxc.percona.com "cross-site-replica" deleted from cross-site-replica-26426 namespace
perconaxtradbclusterbackup.pxc.percona.com "backup-minio-source" deleted from cross-site-16423 namespace
perconaxtradbclusterrestore.pxc.percona.com "backup-minio" deleted from cross-site-replica-26426 namespace
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces pxc-operator
-----------------------------------------------------------------------------------
namespace "cross-site-16423" deleted
namespace "cross-site-replica-26426" deleted
namespace "pxc-operator" deleted
waiting for namespace/pxc-operator to be deleted
Error from server (NotFound): namespaces "pxc-operator" not found
-----------------------------------------------------------------------------------
create namespace pxc-operator
-----------------------------------------------------------------------------------
namespace/pxc-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2234-269f3694-2-cluster4" modified.
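The traced kubectl patch above is what clears the finalizers on the leftover cross-site clusters so their custom resources, and then their namespaces, can actually be deleted before the fresh run starts. A minimal sketch of that cleanup pattern, reusing the names from this run; the polling loop is an assumed stand-in for the harness helper, not its real code:

  # drop finalizers so deletion of the custom resource is not blocked
  kubectl patch pxc -n cross-site-16423 cross-site-source --type=merge -p '{"metadata":{"finalizers":[]}}'
  kubectl delete pxc -n cross-site-16423 cross-site-source
  # recreate the operator namespace only once the old one is really gone
  kubectl delete namespace pxc-operator --ignore-not-found
  until ! kubectl get namespace pxc-operator >/dev/null 2>&1; do
    printf '.'
    sleep 1
  done
  kubectl create namespace pxc-operator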
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
pod/percona-xtradb-cluster-operator-b5f9c4897-95bhn condition met
pod/percona-xtradb-cluster-operator-b5f9c4897-95bhn condition met
waiting for pod/percona-xtradb-cluster-operator-b5f9c4897-95bhn to become Ready.Ok
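The serverside-applied, unchanged and created lines above are the operator bundle being applied and its Deployment coming up in the pxc-operator namespace. A rough equivalent of that sequence; the manifest paths and the pod label are assumptions about the repository layout, not values taken from this log:

  kubectl apply --server-side --force-conflicts -f deploy/crd.yaml   # CRDs
  kubectl apply -n pxc-operator -f deploy/rbac.yaml                  # ClusterRole, ServiceAccount, binding
  kubectl apply -n pxc-operator -f deploy/operator.yaml              # operator Deployment and Service
  kubectl wait -n pxc-operator --for=condition=Ready pod --timeout=300s \
    -l app.kubernetes.io/name=percona-xtradb-cluster-operator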
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces monitoring-pmm3-11923
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
Error from server (NotFound): namespaces "monitoring-pmm3-11923" not found
waiting for namespace/monitoring-pmm3-11923 to be deleted
Error from server (NotFound): namespaces "monitoring-pmm3-11923" not found
-----------------------------------------------------------------------------------
create namespace monitoring-pmm3-11923
-----------------------------------------------------------------------------------
namespace/monitoring-pmm3-11923 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2234-269f3694-2-cluster4" modified.
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
"hashicorp" already exists with the same configuration, skipping
"minio" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "chaos-mesh" chart repository
...Successfully got an update from the "percona" chart repository
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
-----------------------------------------------------------------------------------
install PMM Server
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: monitoring: release: not found
"percona" has been removed from your repositories
"percona" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "chaos-mesh" chart repository
...Successfully got an update from the "percona" chart repository
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME: monitoring
LAST DEPLOYED: Wed Nov 12 00:14:21 2025
NAMESPACE: monitoring-pmm3-11923
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Percona Monitoring and Management (PMM)

An open source database monitoring, observability and management tool
Check more info here: https://docs.percona.com/percona-monitoring-and-management/index.html

Get the application URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get --namespace monitoring-pmm3-11923 svc -w monitoring-service'

  export SERVICE_IP=$(kubectl get svc --namespace monitoring-pmm3-11923 monitoring-service -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
  echo https://$SERVICE_IP:

Get password for the "admin" user:

  export ADMIN_PASS=$(kubectl get secret pmm-secret --namespace monitoring-pmm3-11923 -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode)
  echo $ADMIN_PASS

pod/monitoring-0 condition met
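With pod/monitoring-0 Ready, the PMM endpoint and admin password can be read exactly as the chart NOTES above describe, and a server token can then be minted for the pmm-client sidecars; the next section patches such a token into my-cluster-secrets. The token calls below are a sketch only: they assume PMM 3 exposes the embedded Grafana service-account API under /graph, which this log does not show:

  SERVICE_IP=$(kubectl get svc -n monitoring-pmm3-11923 monitoring-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  ADMIN_PASS=$(kubectl get secret pmm-secret -n monitoring-pmm3-11923 -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode)
  # assumed: create a service account, then a token, through the embedded Grafana API
  SA_ID=$(curl -sk -u "admin:${ADMIN_PASS}" -H 'Content-Type: application/json' \
    -d '{"name":"pxc-e2e","role":"Admin"}' "https://${SERVICE_IP}/graph/api/serviceaccounts" | jq -r '.id')
  TOKEN=$(curl -sk -u "admin:${ADMIN_PASS}" -H 'Content-Type: application/json' \
    -d '{"name":"pxc-e2e"}' "https://${SERVICE_IP}/graph/api/serviceaccounts/${SA_ID}/tokens" | jq -r '.key')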
-----------------------------------------------------------------------------------
create secret
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
-----------------------------------------------------------------------------------
add PMM3 token to secret
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
create PXC cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/monitoring created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
pod/monitoring-haproxy-0 condition met
pod/monitoring-pxc-0 condition met
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/monitoring-haproxy-0 condition met
waiting for pod/monitoring-haproxy-0 to become Ready.Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/monitoring-pxc-0 condition met
waiting for pod/monitoring-pxc-0 to become Ready.Ok
pod/monitoring-pxc-1 condition met
waiting for pod/monitoring-pxc-1 to become Ready.Ok
pod/monitoring-pxc-2 condition met
waiting for pod/monitoring-pxc-2 to become Ready.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-59944c5bbf-ktszr condition met
waiting for pod/pxc-client-59944c5bbf-ktszr to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-59944c5bbf-ktszr condition met
waiting for pod/pxc-client-59944c5bbf-ktszr to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-59944c5bbf-ktszr condition met
waiting for pod/pxc-client-59944c5bbf-ktszr to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-59944c5bbf-ktszr condition met
waiting for pod/pxc-client-59944c5bbf-ktszr to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
pod/pxc-client-59944c5bbf-ktszr condition met
waiting for pod/pxc-client-59944c5bbf-ktszr to become Ready
Defaulted container "pxc-client" out of: pxc-client, backup
.Ok
Unable to use a TTY - input is not a terminal or the right kind of file
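The write data step drives SQL through the pxc-client Deployment; the "Unable to use a TTY" and "Defaulted container" messages are ordinary kubectl exec noise when stdin is piped and no container is named. A hedged sketch of such a write; the password variable and the SQL statements are illustrative, not the test's actual ones:

  echo 'CREATE DATABASE IF NOT EXISTS myApp; CREATE TABLE IF NOT EXISTS myApp.myApp (id int PRIMARY KEY); INSERT INTO myApp.myApp (id) VALUES (100500);' | \
    kubectl exec -i -n monitoring-pmm3-11923 deploy/pxc-client -c pxc-client -- \
      mysql -h monitoring-haproxy -uroot -p"${ROOT_PASSWORD}"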
Waiting for sts/monitoring-pxc to reach generation 1...
Resource sts/monitoring-pxc has reached generation 1.
Waiting for sts/monitoring-haproxy to reach generation 1...
Resource sts/monitoring-haproxy has reached generation 1.
pod/monitoring-haproxy-0 condition met
pod/monitoring-haproxy-1 condition met
pod/monitoring-pxc-0 condition met
pod/monitoring-pxc-1 condition met
pod/monitoring-pxc-2 condition met
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for pxc/monitoring to be ready
-----------------------------------------------------------------------------------
compare statefulset/monitoring-pxc--no-prefix
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/monitoring-haproxy--no-prefix
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
apply my-env-var-secrets to add PMM_PREFIX
-----------------------------------------------------------------------------------
secret/my-env-var-secrets created
Waiting for sts/monitoring-pxc to reach generation 2...
Resource sts/monitoring-pxc is at generation 1. Waiting...
Resource sts/monitoring-pxc has reached generation 2.
Waiting for sts/monitoring-haproxy to reach generation 2...
Resource sts/monitoring-haproxy has reached generation 2.
-----------------------------------------------------------------------------------
create new PMM token and add it to the secret
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
delete old PMM token
-----------------------------------------------------------------------------------
Waiting for sts/monitoring-pxc to reach generation 3...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc has reached generation 3.
Waiting for sts/monitoring-haproxy to reach generation 3...
Resource sts/monitoring-haproxy has reached generation 3.
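The long run of Waiting... lines above is the harness polling metadata.generation on the StatefulSet: replacing the token in my-cluster-secrets makes the operator update the pod template, which bumps the generation and rolls the pods. An assumed equivalent of that patch-and-wait; the secret key name is an assumption, not read from this log:

  kubectl patch secret my-cluster-secrets -n monitoring-pmm3-11923 --type=merge \
    -p "{\"stringData\":{\"pmmservertoken\":\"${NEW_TOKEN}\"}}"
  want=3
  until [ "$(kubectl get sts monitoring-pxc -n monitoring-pmm3-11923 -o jsonpath='{.metadata.generation}')" -ge "$want" ]; do
    echo "Resource sts/monitoring-pxc is at generation $(kubectl get sts monitoring-pxc -n monitoring-pmm3-11923 -o jsonpath='{.metadata.generation}'). Waiting..."
    sleep 10
  done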
pod/monitoring-haproxy-0 condition met
pod/monitoring-haproxy-1 condition met
pod/monitoring-pxc-0 condition met
pod/monitoring-pxc-1 condition met
pod/monitoring-pxc-2 condition met
-----------------------------------------------------------------------------------
check if pmm-client container enabled
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/monitoring-pxc-
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/monitoring-haproxy-
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
check mysql metrics
-----------------------------------------------------------------------------------
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
jq: error (at :0): Cannot iterate over null (null)
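The repeated jq errors above mean the metrics query came back empty, so .data.result was null when jq tried to iterate it; the check keeps retrying until real samples appear. A sketch of that kind of probe, where the /prometheus path and the metric name are assumptions about how PMM exposes its time-series API rather than values from this log:

  curl -sk -H "Authorization: Bearer ${TOKEN}" \
    "https://${SERVICE_IP}/prometheus/api/v1/query?query=mysql_global_status_uptime" \
    | jq -e '.data.result[].value[1]'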
-----------------------------------------------------------------------------------
check haproxy metrics
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
check QAN data
-----------------------------------------------------------------------------------
""
-----------------------------------------------------------------------------------
verify that the custom cluster name is configured
-----------------------------------------------------------------------------------
command terminated with exit code 1
command terminated with exit code 1
command terminated with exit code 1
perconaxtradbcluster.pxc.percona.com/monitoring patched
waiting for pod/monitoring-pxc-0 to be deleted.................
Error from server (NotFound): pods "monitoring-pxc-0" not found
command terminated with exit code 1
command terminated with exit code 1
command terminated with exit code 1
release "monitoring" uninstalled
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
+ kubectl patch pxc -n monitoring-pmm3-11923 monitoring --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/monitoring patched (no change)
perconaxtradbcluster.pxc.percona.com "monitoring" deleted from monitoring-pmm3-11923 namespace
No resources found
No resources found
validatingwebhookconfiguration.admissionregistration.k8s.io "percona-xtradbcluster-webhook" deleted
-----------------------------------------------------------------------------------
test passed
-----------------------------------------------------------------------------------