Log: /mnt/jenkins/workspace/cloud-pxc-operator_PR-1706/e2e-tests/logs/cross-site-8-0.log
WARNING: version difference between client (1.30) and server (1.26) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
Create source cluster
-----------------------------------------------------------------------------------
E0502 01:50:26.052753 4355 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
E0502 01:50:27.639064 4379 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
E0502 01:50:29.780022 4490 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
E0502 01:50:32.043158 4618 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc-backup"
E0502 01:50:34.383893 4835 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc-restore"
E0502 01:50:37.208439 5213 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:50:38.977757 5307 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0502 01:50:40.481635 5437 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:50:42.106304 5543 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0502 01:50:43.660509 5683 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:50:45.529539 5859 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0502 01:50:47.262099 6045 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:50:48.965308 6194 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:50:51.055070 6505 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0502 01:50:52.977944 6765 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:50:55.012123 6975 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E0502 01:50:56.544864 7238 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:50:58.338179 7345 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
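The finalizer patch above is the standard way to unblock deletion of leftover Percona custom resources during pre-test cleanup; since no pxc resources exist yet, kubectl reports that no name was specified. A minimal sketch of that cleanup pattern, assuming a hypothetical namespace (this is not the test suite's exact code):

# Strip finalizers from every pxc resource so deletion does not block on
# the operator, then delete them (namespace "my-ns" is hypothetical).
for cr in $(kubectl get pxc -n my-ns -o name); do
  kubectl patch "$cr" -n my-ns --type=merge -p '{"metadata":{"finalizers":[]}}'
done
kubectl delete pxc --all -n my-ns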
-----------------------------------------------------------------------------------
cleaned up old namespaces pxc-operator
-----------------------------------------------------------------------------------
E0502 01:51:00.049616 7537 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): namespaces "pxc-operator" not found
namespace/pxc-operator -
E0502 01:51:00.013405 7545 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0502 01:51:02.027791 7788 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): namespaces "pxc-operator" not found
-----------------------------------------------------------------------------------
create namespace pxc-operator
-----------------------------------------------------------------------------------
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
namespace/pxc-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1706-347969f5-1-cluster3" modified.
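The namespace bootstrap follows the usual idempotent pattern: delete any stale namespace, recreate it, and point the current kubeconfig context at it (which produces the Context "..." modified. line). A sketch under those assumptions:

# Recreate the working namespace; --ignore-not-found keeps the delete
# idempotent on a clean cluster.
kubectl delete namespace pxc-operator --ignore-not-found --wait
kubectl create namespace pxc-operator
# Switch the current context's default namespace; prints 'Context "..." modified.'
kubectl config set-context --current --namespace=pxc-operator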
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
E0502 01:51:05.062935 8137 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created
E0502 01:51:08.413122 8293 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
E0502 01:51:12.135034 8509 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
pod/percona-xtradb-cluster-operator-795676b7f9-ps9sc condition met
pod/percona-xtradb-cluster-operator-795676b7f9-ps9sc condition met
percona-xtradb-cluster-operator-795676b7f9-ps9sc.Ok
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces cross-site-24628
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "cross-site-24628" not found
namespace/cross-site-24628 -
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (NotFound): namespaces "cross-site-24628" not found
-----------------------------------------------------------------------------------
create namespace cross-site-24628
-----------------------------------------------------------------------------------
namespace/cross-site-24628 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1706-347969f5-1-cluster3" modified.
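The operator bootstrap applies the three CRDs with server-side apply, creates the RBAC objects and the operator Deployment, then waits for the operator pod to become Ready. A hedged sketch of the same sequence; the file paths and pod label are assumptions based on the operator repo's deploy/ bundle, not taken from this log:

# Server-side apply keeps re-runs against existing CRDs conflict-free.
kubectl apply --server-side --force-conflicts -f deploy/crd.yaml
kubectl apply -f deploy/rbac.yaml
kubectl apply -f deploy/operator.yaml
# Block until the operator pod is Ready (label selector is an assumption).
kubectl wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=percona-xtradb-cluster-operator --timeout=300s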
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
"hashicorp" already exists with the same configuration, skipping
"minio" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
NAME: minio-service
LAST DEPLOYED: Thu May 2 01:51:48 2024
NAMESPACE: cross-site-24628
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.cross-site-24628.svc.cluster.local

To access MinIO from localhost, run the below commands:
1. export POD_NAME=$(kubectl get pods --namespace cross-site-24628 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace cross-site-24628
Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace cross-site-24628 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace cross-site-24628 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local
pod/minio-service-76ffcfd45-5d6ng condition met
minio-service-76ffcfd45-5d6ng.Ok
make_bucket: operator-testing
pod "aws-cli" deleted
If you don't see a command prompt, try pressing enter.
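The make_bucket: operator-testing line is aws-cli output, so the test bucket was created by a short-lived aws-cli pod talking to MinIO's in-cluster S3 endpoint. A hedged reconstruction of that step (image, credential values, and endpoint wiring are assumptions):

# One-shot pod that creates the test bucket against in-cluster MinIO;
# the credential values are placeholders, not the real secret contents.
kubectl run -i --rm aws-cli --image=amazon/aws-cli --restart=Never \
  --env=AWS_ACCESS_KEY_ID=some-access-key \
  --env=AWS_SECRET_ACCESS_KEY=some-secret-key -- \
  s3 mb s3://operator-testing --endpoint-url http://minio-service:9000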
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: unable to upgrade connection: container aws-cli not found in pod aws-cli_cross-site-24628
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret unchanged
secret/aws-s3-secret unchanged
secret/gcp-cs-secret unchanged
secret/azure-secret unchanged
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
secret/some-name-ssl created
secret/some-name-ssl-internal created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/cross-site-source created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "cross-site-source-haproxy-0" not found
cross-site-source-haproxy-0.......................................Defaulted container "haproxy" out of: haproxy, pxc-monit, pxc-init (init)
.Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/cross-site-source-pxc-0 condition met
cross-site-source-pxc-0.Ok
pod/cross-site-source-pxc-1 condition met
cross-site-source-pxc-1.Ok
pod/cross-site-source-pxc-2 condition met
cross-site-source-pxc-2.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
Unable to use a TTY - input is not a terminal or the right kind of file
-----------------------------------------------------------------------------------
get main cluster services endpoints
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
-----------------------------------------------------------------------------------
patch source cluster with replicationChannels settings
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
-----------------------------------------------------------------------------------
patch main cluster secrets with replication user
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
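The replicationChannels patch above marks cross-site-source as the asynchronous replication source. A sketch of such a patch following the channel schema documented for the operator; the channel name is hypothetical, and the exact options this test uses are not visible in the log:

# Declare the cluster a replication source on one named channel.
kubectl -n cross-site-24628 patch pxc cross-site-source --type=merge \
  -p '{"spec":{"pxc":{"replicationChannels":[{"name":"source_to_replica","isSource":true}]}}}'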
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
write data to source cluster
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
-----------------------------------------------------------------------------------
take backup of source cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
make backup backup-minio-source
-----------------------------------------------------------------------------------
perconaxtradbclusterbackup.pxc.percona.com/backup-minio-source created
backup-minio-source...............................Succeeded
-----------------------------------------------------------------------------------
create replica cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces cross-site-replica-10252
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "cross-site-replica-10252" not found
namespace/cross-site-replica-10252 -
Error from server (NotFound): namespaces "cross-site-replica-10252" not found
-----------------------------------------------------------------------------------
create namespace cross-site-replica-10252
-----------------------------------------------------------------------------------
namespace/cross-site-replica-10252 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1706-347969f5-1-cluster3" modified.
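backup-minio-source is an on-demand backup, which the operator models as a PerconaXtraDBClusterBackup resource. A minimal sketch, assuming a storage named minio is defined in the cluster spec:

# Request a backup of cross-site-source to the MinIO storage, then wait
# for the backup CR to reach the Succeeded state.
cat <<EOF | kubectl -n cross-site-24628 apply -f -
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup-minio-source
spec:
  pxcCluster: cross-site-source
  storageName: minio
EOF
kubectl -n cross-site-24628 wait pxc-backup/backup-minio-source \
  --for=jsonpath='{.status.state}'=Succeeded --timeout=600s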
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
error: timed out waiting for the condition on pods/percona-xtradb-cluster-operator-795676b7f9-tj86c
pod/percona-xtradb-cluster-operator-795676b7f9-ps9sc condition met
percona-xtradb-cluster-operator-795676b7f9-ps9sc.Ok
secret/cross-site-replica-ssl-internal created
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
secret/some-name-ssl created
secret/some-name-ssl-internal created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/cross-site-replica created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "cross-site-replica-haproxy-0" not found
cross-site-replica-haproxy-0..................................Defaulted container "haproxy" out of: haproxy, pxc-monit, pxc-init (init)
.Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/cross-site-replica-pxc-0 condition met
cross-site-replica-pxc-0.Ok
pod/cross-site-replica-pxc-1 condition met
cross-site-replica-pxc-1.Ok
pod/cross-site-replica-pxc-2 condition met
cross-site-replica-pxc-2.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
Unable to use a TTY - input is not a terminal or the right kind of file
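The secret/cross-site-replica-ssl-internal created line earlier in this section suggests the source cluster's internal TLS secret is copied into the replica namespace so both sites trust the same certificates. The copy mechanism below is an assumption; only the secret and namespace names come from the log:

# Re-create the source cluster's internal SSL secret in the replica
# namespace under the expected name (requires jq; mechanism assumed).
kubectl -n cross-site-24628 get secret some-name-ssl-internal -o json \
  | jq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.ownerReferences)
        | .metadata.name = "cross-site-replica-ssl-internal"
        | .metadata.namespace = "cross-site-replica-10252"' \
  | kubectl apply -f -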
-----------------------------------------------------------------------------------
restore backup from source cluster
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/backup-minio created
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
-----------------------------------------------------------------------------------
get replica cluster services endpoints
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
-----------------------------------------------------------------------------------
patch replica cluster with replicationChannels settings
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
-----------------------------------------------------------------------------------
patch replica cluster secrets with replication user
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
Check replication works between source -> replica
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
-----------------------------------------------------------------------------------
make backup backup-minio-replica
-----------------------------------------------------------------------------------
perconaxtradbclusterbackup.pxc.percona.com/backup-minio-replica created
backup-minio-replica..............Succeeded
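On the replica side, the replicationChannels patch sets isSource: false plus a sourcesList pointing at the source site's externally reachable endpoint. A sketch following the documented schema; the host, port, and weight values are illustrative assumptions:

# Point the replica cluster at the source site on the same channel.
kubectl -n cross-site-replica-10252 patch pxc cross-site-replica --type=merge \
  -p '{"spec":{"pxc":{"replicationChannels":[{"name":"source_to_replica","isSource":false,"sourcesList":[{"host":"10.0.0.10","port":3306,"weight":100}]}]}}}'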
-----------------------------------------------------------------------------------
Switch clusters over
-----------------------------------------------------------------------------------
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1706-347969f5-1-cluster3" modified.
-----------------------------------------------------------------------------------
rebuild source cluster
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/backup-minio created
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
-----------------------------------------------------------------------------------
configure old replica as source
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
-----------------------------------------------------------------------------------
configure old source as replica
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1706-347969f5-1-cluster3" modified.
-----------------------------------------------------------------------------------
Write data to replica cluster
-----------------------------------------------------------------------------------
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
pod/pxc-client-6644d8898f-tsq6t condition met
pxc-client-6644d8898f-tsq6t.Ok
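Both restore steps ('restore backup from source cluster' and 'rebuild source cluster' above) are driven by a PerconaXtraDBClusterRestore resource named backup-minio. A minimal sketch of such a resource; pairing it with backup-minio-replica is an assumption about this test's wiring, and a cross-site restore may instead reference the backup's S3 location via a backupSource block rather than a same-namespace backupName:

# Restore a cluster from a previously taken backup CR (pairing assumed).
cat <<EOF | kubectl -n cross-site-24628 apply -f -
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: backup-minio
spec:
  pxcCluster: cross-site-source
  backupName: backup-minio-replica
EOF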
-----------------------------------------------------------------------------------
Check replication works between replica -> source
-----------------------------------------------------------------------------------
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1706-347969f5-1-cluster3" modified.
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
pod/pxc-client-6644d8898f-2vhdh condition met
pxc-client-6644d8898f-2vhdh.Ok
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
+ kubectl patch pxc -n cross-site-24628 cross-site-source --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
+ kubectl patch pxc -n cross-site-replica-10252 cross-site-replica --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
perconaxtradbcluster.pxc.percona.com "cross-site-source" deleted
perconaxtradbcluster.pxc.percona.com "cross-site-replica" deleted
perconaxtradbclusterbackup.pxc.percona.com "backup-minio-source" deleted
perconaxtradbclusterbackup.pxc.percona.com "backup-minio-replica" deleted
perconaxtradbclusterrestore.pxc.percona.com "backup-minio" deleted
perconaxtradbclusterrestore.pxc.percona.com "backup-minio" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "percona-xtradbcluster-webhook" deleted
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
No resources found
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: resource(s) were provided, but no name was specified
No resources found
No resources found
No resources found
Error from server (NotFound): validatingwebhookconfigurations.admissionregistration.k8s.io "percona-xtradbcluster-webhook" not found
-----------------------------------------------------------------------------------
test passed
-----------------------------------------------------------------------------------
namespace "pxc-operator" force deleted
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
namespace "pxc-operator" force deleted
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.