Log: /mnt/jenkins/workspace/cloud-pxc-operator_PR-1755/e2e-tests/logs/cross-site-8-0.log
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
Create source cluster
-----------------------------------------------------------------------------------
E0709 13:20:26.937260 5837 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:27.136847 5837 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:27.245418 5837 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:27.351178 5837 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:27.457642 5837 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
E0709 13:20:29.484121 6164 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:29.793031 6164 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:29.899444 6164 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:30.007394 6164 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:30.331740 6164 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:30.556675 6164 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:30.670654 6164 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
E0709 13:20:33.035180 6630 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:33.350765 6630 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:33.456867 6630 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:33.563081 6630 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:33.889438 6630 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:34.099162 6630 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:34.215946 6630 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
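The repeated memcache errors above come from kubectl's discovery cache: an aggregated API (metrics.k8s.io/v1beta1, normally served by metrics-server) is registered but unavailable, so every kubectl invocation retries discovery and logs the failure before proceeding. A minimal diagnostic sketch, assuming metrics-server lives in kube-system (not confirmed by this log):

# Check whether the aggregated metrics API is healthy; an Available=False
# APIService here would explain the recurring discovery errors.
kubectl get apiservice v1beta1.metrics.k8s.io
# Inspect the backing deployment (namespace/name are assumptions).
kubectl -n kube-system get deployment metrics-server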
error: the server doesn't have a resource type "pxc"
E0709 13:20:35.749080 7154 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:35.971985 7154 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:36.080997 7154 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:36.187505 7154 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:36.518840 7154 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:36.732131 7154 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:36.842737 7154 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc-backup"
E0709 13:20:37.994139 7540 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:38.245458 7540 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:38.409007 7540 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:38.525774 7540 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:38.919432 7540 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:39.103144 7540 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:39.212835 7540 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc-restore"
E0709 13:20:42.395125 8153 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:42.618537 8153 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:42.725237 8153 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:42.831082 8153 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:44.353104 8518 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:44.663645 8518 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:44.769721 8518 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:44.875512 8518 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
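The "the server doesn't have a resource type" errors for pxc, pxc-backup, and pxc-restore simply mean the cleanup loop runs before the operator's CRDs exist; once the CRDs are applied, those short names resolve. A quick check, as a sketch:

# Confirm whether the operator's CRDs are installed; the short names
# pxc, pxc-backup, and pxc-restore only exist after this group appears.
kubectl get crd | grep pxc.percona.com
kubectl api-resources --api-group=pxc.percona.com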
E0709 13:20:46.656109 8760 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:46.974152 8760 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:47.080657 8760 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:47.187099 8760 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:48.921833 9122 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:49.148504 9122 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:49.255958 9122 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:49.364096 9122 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0709 13:20:50.900963 9468 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:51.208984 9468 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:51.315322 9468 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:51.421157 9468 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:52.723071 9743 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:52.947144 9743 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:53.056506 9743 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:53.162906 9743 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0709 13:20:55.000947 9954 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:55.310555 9954 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:57.129413 10258 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:57.447454 10258 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:57.554003 10258 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:57.660523 10258 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
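The "resource(s) were provided, but no name was specified" errors are kubectl's response when it is given a resource type but neither a name, a label selector, nor --all; the cleanup loop tolerates this on empty namespaces. A sketch of the shape of command that avoids it (this mirrors, rather than reproduces, the test harness):

# Deleting by type requires a target; --all selects every object of the type.
kubectl delete pxc --all
# Or sweep the type across all namespaces:
kubectl delete pxc --all-namespaces --all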
E0709 13:20:59.361004 10677 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:59.579089 10677 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:59.684881 10677 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:20:59.791472 10677 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0709 13:21:01.681550 11072 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:21:01.899638 11072 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:21:02.006137 11072 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:21:02.112250 11072 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:21:03.969697 11374 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:21:04.289638 11374 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:21:04.395899 11374 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0709 13:21:04.502573 11374 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces pxc-operator
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "pxc-operator" not found
namespace/pxc-operator -
Error from server (NotFound): namespaces "pxc-operator" not found
-----------------------------------------------------------------------------------
create namespace pxc-operator
-----------------------------------------------------------------------------------
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
namespace/pxc-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1755-37d8b781-1-cluster3" modified.
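The namespace-preparation step above deletes any leftover namespace (the NotFound errors are expected on a fresh cluster), recreates it, and switches the kubeconfig context to it, which is what prints the Context "..." modified line. A minimal sketch of that sequence, with the namespace name taken from the log:

# Recreate the working namespace and point the current context at it.
kubectl delete namespace pxc-operator --ignore-not-found
kubectl create namespace pxc-operator
kubectl config set-context --current --namespace=pxc-operator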
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
pod/percona-xtradb-cluster-operator-56b5fcf69d-5z64v condition met
pod/percona-xtradb-cluster-operator-56b5fcf69d-5z64v condition met
percona-xtradb-cluster-operator-56b5fcf69d-5z64v.Ok
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces cross-site-10355
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "cross-site-10355" not found
namespace/cross-site-10355 -
Error from server (NotFound): namespaces "cross-site-10355" not found
-----------------------------------------------------------------------------------
create namespace cross-site-10355
-----------------------------------------------------------------------------------
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
namespace/cross-site-10355 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1755-37d8b781-1-cluster3" modified.
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
"hashicorp" already exists with the same configuration, skipping
"minio" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "chaos-mesh" chart repository
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
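The operator bootstrap earlier in this block server-side-applies the three CRDs, creates the RBAC objects and Deployment, then waits for the operator pod to become Ready. A sketch of the equivalent commands, assuming the file layout of the percona-xtradb-cluster-operator repo (the deploy/ paths and label are assumptions, not taken from this log):

# Server-side apply matches the "serverside-applied" lines above.
kubectl apply --server-side -f deploy/crd.yaml
kubectl apply -f deploy/rbac.yaml
kubectl apply -f deploy/operator.yaml
# Block until the operator pod reports Ready.
kubectl wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=percona-xtradb-cluster-operator --timeout=300s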
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
NAME: minio-service
LAST DEPLOYED: Tue Jul 9 13:22:05 2024
NAMESPACE: cross-site-10355
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.cross-site-10355.svc.cluster.local

To access MinIO from localhost, run the below commands:
1. export POD_NAME=$(kubectl get pods --namespace cross-site-10355 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace cross-site-10355
Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace cross-site-10355 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace cross-site-10355 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local
pod/minio-service-76ffcfd45-cb5rh condition met
minio-service-76ffcfd45-cb5rh.Ok
make_bucket: operator-testing
pod "aws-cli" deleted
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: Internal error occurred: error attaching to container: container is in CONTAINER_EXITED state
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret unchanged
secret/aws-s3-secret unchanged
secret/gcp-cs-secret unchanged
secret/azure-secret unchanged
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
secret/some-name-ssl created
secret/some-name-ssl-internal created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/cross-site-source created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "cross-site-source-haproxy-0" not found
cross-site-source-haproxy-0.......................................Defaulted container "haproxy" out of: haproxy, pxc-monit, pxc-init (init)
.Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/cross-site-source-pxc-0 condition met
cross-site-source-pxc-0.Ok
pod/cross-site-source-pxc-1 condition met
cross-site-source-pxc-1.Ok
pod/cross-site-source-pxc-2 condition met
cross-site-source-pxc-2.Ok
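Creating the source cluster above amounts to applying the secrets, the pxc-client Deployment, and a PerconaXtraDBCluster CR named cross-site-source, then polling each pod. A minimal sketch, where the manifest file names are assumptions and the pod names come from the log:

# Apply secrets and the cluster CR (paths are illustrative).
kubectl apply -f deploy/secrets.yaml
kubectl apply -f cr.yaml   # metadata.name: cross-site-source
# Wait for each PXC member pod to become Ready.
for i in 0 1 2; do
  kubectl wait --for=condition=Ready pod/cross-site-source-pxc-$i --timeout=600s
done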
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
Unable to use a TTY - input is not a terminal or the right kind of file
-----------------------------------------------------------------------------------
get main cluster services endpoints
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
-----------------------------------------------------------------------------------
patch source cluster with replicationChannels settings
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
-----------------------------------------------------------------------------------
patch main cluster secrets with replication user
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
write data to source cluster
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
pod/pxc-client-6644d8898f-mfg9r condition met
pxc-client-6644d8898f-mfg9r.Ok
-----------------------------------------------------------------------------------
take backup of source cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
make backup backup-minio-source
-----------------------------------------------------------------------------------
perconaxtradbclusterbackup.pxc.percona.com/backup-minio-source created
backup-minio-source................Succeeded
-----------------------------------------------------------------------------------
create replica cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces cross-site-replica-11112
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "cross-site-replica-11112" not found
namespace/cross-site-replica-11112 -
Error from server (NotFound): namespaces "cross-site-replica-11112" not found
-----------------------------------------------------------------------------------
create namespace cross-site-replica-11112
-----------------------------------------------------------------------------------
namespace/cross-site-replica-11112 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1755-37d8b781-1-cluster3" modified.
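The "make backup backup-minio-source" step creates a PerconaXtraDBClusterBackup CR and waits for it to report Succeeded. A sketch of such a CR, where storageName must match a storage entry in the cluster spec (the minio storage name is an assumption based on the secret names earlier in the log):

# On-demand backup of the source cluster to the MinIO storage.
kubectl apply -f - <<EOF
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup-minio-source
spec:
  pxcCluster: cross-site-source
  storageName: minio
EOF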
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
error: timed out waiting for the condition on pods/percona-xtradb-cluster-operator-56b5fcf69d-z9t5r
pod/percona-xtradb-cluster-operator-56b5fcf69d-5z64v condition met
percona-xtradb-cluster-operator-56b5fcf69d-5z64v.Ok
secret/cross-site-replica-ssl-internal created
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
secret/some-name-ssl created
secret/some-name-ssl-internal created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/cross-site-replica created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "cross-site-replica-haproxy-0" not found
cross-site-replica-haproxy-0........................................Defaulted container "haproxy" out of: haproxy, pxc-monit, pxc-init (init)
.Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/cross-site-replica-pxc-0 condition met
cross-site-replica-pxc-0.Ok
pod/cross-site-replica-pxc-1 condition met
cross-site-replica-pxc-1.Ok
pod/cross-site-replica-pxc-2 condition met
cross-site-replica-pxc-2.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
Unable to use a TTY - input is not a terminal or the right kind of file
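The "write data" steps run SQL through the pxc-client Deployment with kubectl exec; passing -i without -t is what produces the "Unable to use a TTY" notice when stdin is not a terminal. A sketch, where the HAProxy host, credentials, and SQL are placeholders rather than values from the harness:

# Pipe SQL into mysql inside the client pod; -i keeps stdin open, no -t
# since there is no terminal (hence the TTY notice in the log).
kubectl exec -i deployment/pxc-client -- \
  mysql -h cross-site-replica-haproxy -uroot -proot_password \
  -e "CREATE DATABASE IF NOT EXISTS myApp; CREATE TABLE IF NOT EXISTS myApp.myApp (id int PRIMARY KEY);"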
-----------------------------------------------------------------------------------
restore backup from source cluster
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/backup-minio created
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
-----------------------------------------------------------------------------------
get replica cluster services endpoints
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
-----------------------------------------------------------------------------------
patch replica cluster with replicationChannels settings
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
-----------------------------------------------------------------------------------
patch replica cluster secrets with replication user
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
Check replication works between source -> replica
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
-----------------------------------------------------------------------------------
make backup backup-minio-replica
-----------------------------------------------------------------------------------
perconaxtradbclusterbackup.pxc.percona.com/backup-minio-replica created
backup-minio-replica....................................................................................................Succeeded
-----------------------------------------------------------------------------------
Switch clusters over
-----------------------------------------------------------------------------------
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1755-37d8b781-1-cluster3" modified.
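The "patch replica cluster with replicationChannels settings" step configures the replica side of the channel: isSource is false and sourcesList points at the source cluster's externally reachable endpoints. A sketch of the merge patch, where the channel name and host are placeholders (the real test derives the host from the source cluster's service endpoints collected earlier):

# Point the replica cluster's channel at the source endpoints.
kubectl -n cross-site-replica-11112 patch pxc cross-site-replica --type=merge -p '{
  "spec": {"pxc": {"replicationChannels": [{
    "name": "source_to_replica",
    "isSource": false,
    "sourcesList": [{"host": "10.0.0.1", "port": 3306, "weight": 100}]
  }]}}
}'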
-----------------------------------------------------------------------------------
rebuild source cluster
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/backup-minio created
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
pod/pxc-client-6644d8898f-6h6b6 condition met
pxc-client-6644d8898f-6h6b6.Ok
pod/pxc-client-6644d8898f-6h6b6 condition met
pxc-client-6644d8898f-6h6b6.Ok
-----------------------------------------------------------------------------------
configure old replica as source
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
-----------------------------------------------------------------------------------
configure old source as replica
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1755-37d8b781-1-cluster3" modified.
-----------------------------------------------------------------------------------
Write data to replica cluster
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
pod/pxc-client-6644d8898f-9cc6r condition met
pxc-client-6644d8898f-9cc6r.Ok
-----------------------------------------------------------------------------------
Check replication works between replica -> source
-----------------------------------------------------------------------------------
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1755-37d8b781-1-cluster3" modified.
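The switchover above is driven by merge patches in both namespaces: the old replica's channel becomes the source side (isSource: true), and the old source gets a replica-side channel pointing back at it. A sketch under the same placeholder channel name and host as before:

# Promote the old replica to source...
kubectl -n cross-site-replica-11112 patch pxc cross-site-replica --type=merge \
  -p '{"spec":{"pxc":{"replicationChannels":[{"name":"source_to_replica","isSource":true}]}}}'
# ...and demote the old source to replica, pointing it at the new source.
kubectl -n cross-site-10355 patch pxc cross-site-source --type=merge \
  -p '{"spec":{"pxc":{"replicationChannels":[{"name":"source_to_replica","isSource":false,"sourcesList":[{"host":"10.0.0.2","port":3306,"weight":100}]}]}}}'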
pod/pxc-client-6644d8898f-6h6b6 condition met
pxc-client-6644d8898f-6h6b6.Ok
pod/pxc-client-6644d8898f-6h6b6 condition met
pxc-client-6644d8898f-6h6b6.Ok
pod/pxc-client-6644d8898f-6h6b6 condition met
pxc-client-6644d8898f-6h6b6.Ok
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
+ kubectl patch pxc -n cross-site-10355 cross-site-source --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
+ kubectl patch pxc -n cross-site-replica-11112 cross-site-replica --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
perconaxtradbcluster.pxc.percona.com "cross-site-source" deleted
perconaxtradbcluster.pxc.percona.com "cross-site-replica" deleted
perconaxtradbclusterbackup.pxc.percona.com "backup-minio-source" deleted
perconaxtradbclusterbackup.pxc.percona.com "backup-minio-replica" deleted
perconaxtradbclusterrestore.pxc.percona.com "backup-minio" deleted
perconaxtradbclusterrestore.pxc.percona.com "backup-minio" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "percona-xtradbcluster-webhook" deleted
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
No resources found
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: resource(s) were provided, but no name was specified
No resources found
No resources found
No resources found
Error from server (NotFound): validatingwebhookconfigurations.admissionregistration.k8s.io "percona-xtradbcluster-webhook" not found
-----------------------------------------------------------------------------------
test passed
-----------------------------------------------------------------------------------
namespace "pxc-operator" force deleted
namespace "pxc-operator" force deleted
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
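Teardown follows the pattern visible in the traced commands above: clear the CRs' finalizers so deletion cannot hang on the operator, delete the CRs, then force-remove the namespaces, which is what emits the "Immediate deletion" warnings. A minimal sketch (namespace and CR names from the log):

# Remove finalizers first so CR deletion does not block.
kubectl -n cross-site-10355 patch pxc cross-site-source --type=merge -p '{"metadata":{"finalizers":[]}}'
kubectl delete pxc --all --all-namespaces
# Force deletion skips waiting for graceful termination, hence the warning.
kubectl delete namespace pxc-operator --grace-period=0 --force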