Log: /mnt/jenkins/workspace/cloud-pxc-operator_PR-2036/e2e-tests/logs/monitoring-pmm3-8-0.log
WARNING: version difference between client (1.33) and server (1.31) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.33) and server (1.31) exceeds the supported minor version skew of +/-1
No resources found
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: resource(s) were provided, but no name was specified
No resources found
No resources found
No resources found
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces pxc-operator
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "pxc-operator" not found
waiting for namespace/pxc-operator to be deleted
error: resource(s) were provided, but no name was specified
Error from server (NotFound): namespaces "pxc-operator" not found
-----------------------------------------------------------------------------------
create namespace pxc-operator
-----------------------------------------------------------------------------------
namespace/pxc-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2036-c42c1c6c-4-cluster8" modified.
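
A minimal sketch of the finalizer cleanup that produces the "no name was specified" errors above; iterating over the resources that actually exist avoids patching without a name. The namespace value is a placeholder, not taken from the test harness:

  # Sketch only: strip finalizers from every PXC resource in a namespace
  # before deleting it, skipping the patch entirely when nothing is found.
  ns="pxc-operator"   # hypothetical value; the harness supplies its own
  for res in $(kubectl get pxc -n "$ns" -o name 2>/dev/null); do
    kubectl patch "$res" -n "$ns" --type=merge -p '{"metadata":{"finalizers":[]}}'
  done
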
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
pod/percona-xtradb-cluster-operator-779b89dbf-f58bs condition met
pod/percona-xtradb-cluster-operator-779b89dbf-f58bs condition met
waiting for pod/percona-xtradb-cluster-operator-779b89dbf-f58bs to become Ready.Ok
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces monitoring-pmm3-13861
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "monitoring-pmm3-13861" not found
waiting for namespace/monitoring-pmm3-13861 to be deleted
Error from server (NotFound): namespaces "monitoring-pmm3-13861" not found
-----------------------------------------------------------------------------------
create namespace monitoring-pmm3-13861
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
namespace/monitoring-pmm3-13861 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-2036-c42c1c6c-4-cluster8" modified.
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
"hashicorp" already exists with the same configuration, skipping
"minio" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "chaos-mesh" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "percona" chart repository
Update Complete. ⎈Happy Helming!⎈
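
For the "start PXC operator" step above, the "serverside-applied" and wait messages suggest a bootstrap roughly like the following sketch; the deploy/ file paths and the label selector are assumptions taken from the operator repository layout, not from this log:

  # Sketch only: server-side apply of the CRDs, regular apply of RBAC and the
  # operator deployment, then a readiness wait on the operator pod.
  kubectl apply --server-side --force-conflicts -f deploy/crd.yaml
  kubectl apply -n pxc-operator -f deploy/rbac.yaml -f deploy/operator.yaml
  kubectl wait --for=condition=Ready pod -n pxc-operator \
    -l app.kubernetes.io/name=percona-xtradb-cluster-operator --timeout=300s
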
-----------------------------------------------------------------------------------
install PMM Server
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: monitoring: release: not found
"percona" has been removed from your repositories
"percona" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "chaos-mesh" chart repository
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "hashicorp" chart repository
...Successfully got an update from the "percona" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME: monitoring
LAST DEPLOYED: Tue Jul 29 20:01:13 2025
NAMESPACE: monitoring-pmm3-13861
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Percona Monitoring and Management (PMM)
An open source database monitoring, observability and management tool
Check more info here: https://docs.percona.com/percona-monitoring-and-management/index.html

Get the application URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get --namespace monitoring-pmm3-13861 svc -w monitoring-service'
  export SERVICE_IP=$(kubectl get svc --namespace monitoring-pmm3-13861 monitoring-service -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
  echo https://$SERVICE_IP:

Get password for the "admin" user:
  export ADMIN_PASS=$(kubectl get secret pmm-secret --namespace monitoring-pmm3-13861 -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode)
  echo $ADMIN_PASS
pod/monitoring-0 condition met
-----------------------------------------------------------------------------------
create secret
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
-----------------------------------------------------------------------------------
add PMM3 token to secret
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
create PXC cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/monitoring created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "monitoring-haproxy-0" not found
waiting for pod/monitoring-haproxy-0 to become Ready....................................Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/monitoring-pxc-0 condition met
waiting for pod/monitoring-pxc-0 to become Ready.Ok
pod/monitoring-pxc-1 condition met
waiting for pod/monitoring-pxc-1 to become Ready.Ok
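
A minimal sketch of the "install PMM Server" step and the credential retrieval shown in the Helm notes above, assuming the percona/pmm chart and the default monitoring-service / pmm-secret names:

  # Sketch only: install PMM Server, wait for its pod, read back endpoint and password.
  helm repo add percona https://percona.github.io/percona-helm-charts/
  helm repo update
  helm install monitoring percona/pmm --namespace monitoring-pmm3-13861
  kubectl wait --for=condition=Ready pod/monitoring-0 -n monitoring-pmm3-13861 --timeout=600s
  SERVICE_IP=$(kubectl get svc monitoring-service -n monitoring-pmm3-13861 \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  ADMIN_PASS=$(kubectl get secret pmm-secret -n monitoring-pmm3-13861 \
    -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode)
  echo "PMM Server: https://${SERVICE_IP}/  admin password: ${ADMIN_PASS}"
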
pod/monitoring-pxc-2 condition met
waiting for pod/monitoring-pxc-2 to become Ready.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-59944c5bbf-5rzwv condition met
waiting for pod/pxc-client-59944c5bbf-5rzwv to become ReadyDefaulted container "pxc-client" out of: pxc-client, backup .Ok
pod/pxc-client-59944c5bbf-5rzwv condition met
waiting for pod/pxc-client-59944c5bbf-5rzwv to become ReadyDefaulted container "pxc-client" out of: pxc-client, backup .Ok
pod/pxc-client-59944c5bbf-5rzwv condition met
waiting for pod/pxc-client-59944c5bbf-5rzwv to become ReadyDefaulted container "pxc-client" out of: pxc-client, backup .Ok
pod/pxc-client-59944c5bbf-5rzwv condition met
waiting for pod/pxc-client-59944c5bbf-5rzwv to become ReadyDefaulted container "pxc-client" out of: pxc-client, backup .Ok
pod/pxc-client-59944c5bbf-5rzwv condition met
waiting for pod/pxc-client-59944c5bbf-5rzwv to become ReadyDefaulted container "pxc-client" out of: pxc-client, backup .Ok
Unable to use a TTY - input is not a terminal or the right kind of file
Waiting for sts/monitoring-pxc to reach generation 1...
Resource sts/monitoring-pxc has reached generation 1.
Waiting for sts/monitoring-haproxy to reach generation 1...
Resource sts/monitoring-haproxy has reached generation 1.
pod/monitoring-haproxy-0 condition met
pod/monitoring-haproxy-1 condition met
pod/monitoring-pxc-0 condition met
pod/monitoring-pxc-1 condition met
pod/monitoring-pxc-2 condition met
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for pxc/monitoring to be ready
-----------------------------------------------------------------------------------
compare statefulset/monitoring-pxc--no-prefix
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/monitoring-haproxy--no-prefix
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
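
The "Waiting for sts/... to reach generation N" messages above come from a generation poll; a minimal sketch of that kind of wait, with the function name, poll interval, and exact fields chosen here for illustration:

  # Sketch only: block until a StatefulSet's spec generation and the controller's
  # observed generation reach the expected value.
  wait_for_generation() {
    local sts="$1" want="$2" gen obs
    while true; do
      gen=$(kubectl get "$sts" -o jsonpath='{.metadata.generation}' 2>/dev/null || echo 0)
      obs=$(kubectl get "$sts" -o jsonpath='{.status.observedGeneration}' 2>/dev/null || echo 0)
      if [ "${gen:-0}" -ge "$want" ] && [ "${obs:-0}" -ge "$want" ]; then
        echo "Resource $sts has reached generation $want."
        return 0
      fi
      echo "Resource $sts is at generation ${gen:-0}. Waiting..."
      sleep 5
    done
  }
  wait_for_generation sts/monitoring-pxc 1
  wait_for_generation sts/monitoring-haproxy 1
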
-----------------------------------------------------------------------------------
apply my-env-var-secrets to add PMM_PREFIX
-----------------------------------------------------------------------------------
secret/my-env-var-secrets created
Waiting for sts/monitoring-pxc to reach generation 2...
Resource sts/monitoring-pxc is at generation 1. Waiting...
Resource sts/monitoring-pxc has reached generation 2.
Waiting for sts/monitoring-haproxy to reach generation 2...
Resource sts/monitoring-haproxy has reached generation 2.
-----------------------------------------------------------------------------------
create new PMM token and add it to the secret
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
delete old PMM token
-----------------------------------------------------------------------------------
Waiting for sts/monitoring-pxc to reach generation 3...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc is at generation 2. Waiting...
Resource sts/monitoring-pxc has reached generation 3.
Waiting for sts/monitoring-haproxy to reach generation 3...
Resource sts/monitoring-haproxy has reached generation 3.
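
For the token-rotation step above ("create new PMM token and add it to the secret"), the patch on my-cluster-secrets is roughly of this shape; a sketch only, where the secret key name "pmmservertoken" and the token value are assumptions rather than details taken from this log:

  # Sketch only: push a freshly created PMM server token into the cluster secret.
  # Key name and token value are placeholders; use what the operator release expects.
  NEW_TOKEN="example-token"
  kubectl patch secret my-cluster-secrets -n monitoring-pmm3-13861 --type=merge \
    -p "{\"data\":{\"pmmservertoken\":\"$(printf '%s' "$NEW_TOKEN" | base64)\"}}"
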
pod/monitoring-haproxy-0 condition met
pod/monitoring-haproxy-1 condition met
pod/monitoring-pxc-0 condition met
pod/monitoring-pxc-1 condition met
pod/monitoring-pxc-2 condition met
-----------------------------------------------------------------------------------
check if pmm-client container enabled
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/monitoring-pxc-
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
compare statefulset/monitoring-haproxy-
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
return true if kubernetes version equal or greater than desired
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
check mysql metrics
-----------------------------------------------------------------------------------
jq: error (at <stdin>:0): Cannot iterate over null (null)
jq: error (at <stdin>:0): Cannot iterate over null (null)
jq: error (at <stdin>:0): Cannot iterate over null (null)
jq: error (at <stdin>:0): Cannot iterate over null (null)
jq: error (at <stdin>:0): Cannot iterate over null (null)
jq: error (at <stdin>:0): Cannot iterate over null (null)
jq: error (at <stdin>:0): Cannot iterate over null (null)
-----------------------------------------------------------------------------------
check haproxy metrics
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
check QAN data
-----------------------------------------------------------------------------------
""
-----------------------------------------------------------------------------------
verify that the custom cluster name is configured
-----------------------------------------------------------------------------------
command terminated with exit code 1
command terminated with exit code 1
command terminated with exit code 1
perconaxtradbcluster.pxc.percona.com/monitoring patched
waiting for pod/monitoring-pxc-0 to be deleted................Error from server (NotFound): pods "monitoring-pxc-0" not found
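
The "check mysql metrics" step above pipes an API response into jq, and an empty response surfaces as "Cannot iterate over null". A small sketch of a tolerant extraction; the .data.result path is illustrative, not the test's actual query:

  # Sketch only: treat a null/empty response body as "no data yet" instead of a jq error.
  response='{"data":null}'
  echo "$response" | jq -r '.data.result[]?.value[1] // empty'
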
command terminated with exit code 1
command terminated with exit code 1
command terminated with exit code 1
release "monitoring" uninstalled
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
+ kubectl patch pxc -n monitoring-pmm3-13861 monitoring --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/monitoring patched (no change)
perconaxtradbcluster.pxc.percona.com "monitoring" deleted
No resources found
No resources found
validatingwebhookconfiguration.admissionregistration.k8s.io "percona-xtradbcluster-webhook" deleted
-----------------------------------------------------------------------------------
test passed
-----------------------------------------------------------------------------------
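
A minimal sketch of the teardown sequence implied by the log above, with the namespace hard-coded here only for illustration:

  # Sketch only: uninstall PMM, clear finalizers, delete the cluster, webhook, and namespace.
  ns="monitoring-pmm3-13861"
  helm uninstall monitoring -n "$ns" || true
  kubectl patch pxc monitoring -n "$ns" --type=merge -p '{"metadata":{"finalizers":[]}}'
  kubectl delete pxc monitoring -n "$ns"
  kubectl delete validatingwebhookconfiguration percona-xtradbcluster-webhook
  kubectl delete namespace "$ns" --wait=false
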