Log: /mnt/jenkins/workspace/cloud-pxc-operator_PR-1421/e2e-tests/logs/cross-site-8-0.log
WARNING: version difference between client (1.29) and server (1.25) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.29) and server (1.25) exceeds the supported minor version skew of +/-1
-----------------------------------------------------------------------------------
Create source cluster
-----------------------------------------------------------------------------------
E1219 08:27:55.497562 4284 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:55.731233 4284 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:55.837245 4284 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:55.943475 4284 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:56.054637 4284 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
E1219 08:27:56.718969 4431 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:57.035248 4431 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:57.141410 4431 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:57.247490 4431 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:57.579685 4431 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:57.783215 4431 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:27:57.892483 4431 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
E1219 08:27:59.938756 4516 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:00.233681 4516 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:00.341815 4516 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:00.448976 4516 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:00.768201 4516 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:00.989142 4516 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:01.097680 4516 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc"
E1219 08:28:03.074384 4899 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:03.291507 4899 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:03.397269 4899 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:03.504858 4899 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:03.828038 4899 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:04.039767 4899 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:04.149188 4899 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc-backup"
E1219 08:28:06.270200 5180 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:06.493921 5180 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:06.601306 5180 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:06.707907 5180 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:07.052429 5180 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:07.260522 5180 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:07.370291 5180 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: the server doesn't have a resource type "pxc-restore"
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
E1219 08:28:11.081614 5727 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:11.338687 5727 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:13.126114 5953 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:13.360572 5953 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:13.466760 5953 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:13.573442 5953 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:15.880949 6271 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:16.107184 6271 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:16.215011 6271 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:16.321684 6271 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E1219 08:28:19.058551 6536 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:19.280952 6536 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:19.387805 6536 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:19.494638 6536 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:21.847857 6901 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:22.159113 6901 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:22.265008 6901 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:22.371022 6901 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E1219 08:28:23.963400 7131 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:24.273244 7131 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:24.379035 7131 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:24.484733 7131 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:26.839589 7455 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:27.158733 7455 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:27.265095 7455 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:27.371950 7455 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E1219 08:28:30.249912 7689 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:30.570855 7689 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:30.677077 7689 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:30.783666 7689 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:33.964908 8105 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:34.176614 8105 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:34.282964 8105 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:34.388940 8105 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E1219 08:28:36.667861 8503 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:36.890334 8503 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:36.998767 8503 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:37.107004 8503 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:39.834778 8801 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:40.147354 8801 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:40.254298 8801 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:40.359466 8801 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
E1219 08:28:43.174901 9155 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:43.401649 9155 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:43.511323 9155 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:43.617874 9155 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:46.363152 9513 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:46.684245 9513 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:46.792415 9513 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:46.901012 9513 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces pxc-operator
-----------------------------------------------------------------------------------
E1219 08:28:49.787156 9860 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:49.911169 9860 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:50.023581 9860 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:50.133886 9860 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): namespaces "pxc-operator" not found
namespace/pxc-operator -
E1219 08:28:49.904224 9866 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:50.129700 9866 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:50.237161 9866 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1219 08:28:50.344703 9866 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): namespaces "pxc-operator" not found
-----------------------------------------------------------------------------------
create namespace pxc-operator
-----------------------------------------------------------------------------------
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
namespace/pxc-operator created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1421-86089695-4-cluster3" modified.
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
pod/percona-xtradb-cluster-operator-5c699b45b5-5d8m8 condition met
pod/percona-xtradb-cluster-operator-5c699b45b5-5d8m8 condition met
percona-xtradb-cluster-operator-5c699b45b5-5d8m8.Ok
-----------------------------------------------------------------------------------
destroy chaos-mesh
-----------------------------------------------------------------------------------
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
error: resource(s) were provided, but no name was specified
-----------------------------------------------------------------------------------
cleaned up all old namespaces
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces cross-site-15481
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "cross-site-15481" not found
namespace/cross-site-15481 -
Error from server (NotFound): namespaces "cross-site-15481" not found
-----------------------------------------------------------------------------------
create namespace cross-site-15481
-----------------------------------------------------------------------------------
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
namespace/cross-site-15481 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1421-86089695-4-cluster3" modified.
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
"hashicorp" has been added to your repositories
"minio" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "chaos-mesh" chart repository
...Successfully got an update from the "minio" chart repository
...Successfully got an update from the "hashicorp" chart repository
Update Complete. ⎈Happy Helming!⎈
-----------------------------------------------------------------------------------
install Minio
-----------------------------------------------------------------------------------
Error: uninstall: Release not loaded: minio-service: release: not found
NAME: minio-service
LAST DEPLOYED: Tue Dec 19 08:30:16 2023
NAMESPACE: cross-site-15481
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
minio-service.cross-site-15481.svc.cluster.local

To access MinIO from localhost, run the below commands:
1. export POD_NAME=$(kubectl get pods --namespace cross-site-15481 -l "release=minio-service" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace cross-site-15481

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
2. export MC_HOST_minio-service-local=http://$(kubectl get secret --namespace cross-site-15481 minio-service -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace cross-site-15481 minio-service -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
3. mc ls minio-service-local
pod/minio-service-75dd45bdcd-dfh8n condition met
minio-service-75dd45bdcd-dfh8n.Ok
make_bucket: operator-testing
pod "aws-cli" deleted
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/aws-cli, falling back to streaming logs: unable to upgrade connection: container aws-cli not found in pod aws-cli_cross-site-15481
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret unchanged
secret/aws-s3-secret unchanged
secret/gcp-cs-secret unchanged
secret/azure-secret unchanged
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
secret/some-name-ssl created
secret/some-name-ssl-internal created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/cross-site-source created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "cross-site-source-haproxy-0" not found
cross-site-source-haproxy-0...........................................Defaulted container "haproxy" out of: haproxy, pxc-monit, pxc-init (init) .Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/cross-site-source-pxc-0 condition met
cross-site-source-pxc-0.Ok
pod/cross-site-source-pxc-1 condition met
cross-site-source-pxc-1.Ok
pod/cross-site-source-pxc-2 condition met
cross-site-source-pxc-2.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
Unable to use a TTY - input is not a terminal or the right kind of file
-----------------------------------------------------------------------------------
get main cluster services endpoints
-----------------------------------------------------------------------------------
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
-----------------------------------------------------------------------------------
patch source cluster with replicationChannels settings
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
-----------------------------------------------------------------------------------
patch main cluster secrets with replication user
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
write data to source cluster
-----------------------------------------------------------------------------------
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
-----------------------------------------------------------------------------------
take backup of source cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
make backup backup-minio-source
-----------------------------------------------------------------------------------
perconaxtradbclusterbackup.pxc.percona.com/backup-minio-source created
backup-minio-source...............Succeeded
-----------------------------------------------------------------------------------
create replica cluster
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
cleaned up old namespaces cross-site-replica-25862
-----------------------------------------------------------------------------------
Error from server (NotFound): namespaces "cross-site-replica-25862" not found
namespace/cross-site-replica-25862 -
Error from server (NotFound): namespaces "cross-site-replica-25862" not found
-----------------------------------------------------------------------------------
create namespace cross-site-replica-25862
-----------------------------------------------------------------------------------
namespace/cross-site-replica-25862 created
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1421-86089695-4-cluster3" modified.
-----------------------------------------------------------------------------------
start PXC operator
-----------------------------------------------------------------------------------
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com serverside-applied
clusterrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged
serviceaccount/percona-xtradb-cluster-operator created
clusterrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged
deployment.apps/percona-xtradb-cluster-operator created
service/percona-xtradb-cluster-operator created
error: timed out waiting for the condition on pods/percona-xtradb-cluster-operator-5c699b45b5-pcqh8
pod/percona-xtradb-cluster-operator-5c699b45b5-5d8m8 condition met
percona-xtradb-cluster-operator-5c699b45b5-5d8m8.Ok
secret/cross-site-replica-ssl-internal created
-----------------------------------------------------------------------------------
create secrets for cloud storages
-----------------------------------------------------------------------------------
secret/minio-secret created
secret/aws-s3-secret created
secret/gcp-cs-secret created
secret/azure-secret created
-----------------------------------------------------------------------------------
create first PXC cluster
-----------------------------------------------------------------------------------
secret/my-cluster-secrets created
secret/some-name-ssl created
secret/some-name-ssl-internal created
deployment.apps/pxc-client created
perconaxtradbcluster.pxc.percona.com/cross-site-replica created
-----------------------------------------------------------------------------------
check if all 3 Pods started
-----------------------------------------------------------------------------------
error: no matching resources found
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
Error from server (NotFound): pods "cross-site-replica-haproxy-0" not found
cross-site-replica-haproxy-0..........................Defaulted container "haproxy" out of: haproxy, pxc-monit, pxc-init (init) .Ok
-----------------------------------------------------------------------------------
wait for running cluster
-----------------------------------------------------------------------------------
pod/cross-site-replica-pxc-0 condition met
cross-site-replica-pxc-0.Ok
pod/cross-site-replica-pxc-1 condition met
cross-site-replica-pxc-1.Ok
pod/cross-site-replica-pxc-2 condition met
cross-site-replica-pxc-2.Ok
-----------------------------------------------------------------------------------
write data
-----------------------------------------------------------------------------------
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
Unable to use a TTY - input is not a terminal or the right kind of file
-----------------------------------------------------------------------------------
restore backup from source cluster
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/backup-minio created
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
-----------------------------------------------------------------------------------
get replica cluster services endpoints
-----------------------------------------------------------------------------------
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
-----------------------------------------------------------------------------------
patch replica cluster with replicationChannels settings
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
-----------------------------------------------------------------------------------
patch replica cluster secrets with replication user
-----------------------------------------------------------------------------------
secret/my-cluster-secrets patched
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
-----------------------------------------------------------------------------------
Check replication works between source -> replica
-----------------------------------------------------------------------------------
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
-----------------------------------------------------------------------------------
make backup backup-minio-replica
-----------------------------------------------------------------------------------
perconaxtradbclusterbackup.pxc.percona.com/backup-minio-replica created
backup-minio-replica.............Succeeded
-----------------------------------------------------------------------------------
Switch clusters over
-----------------------------------------------------------------------------------
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1421-86089695-4-cluster3" modified.
-----------------------------------------------------------------------------------
rebuild source cluster
-----------------------------------------------------------------------------------
perconaxtradbclusterrestore.pxc.percona.com/backup-minio created
-----------------------------------------------------------------------------------
wait cluster consistency
-----------------------------------------------------------------------------------
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
waiting for cluster readyness
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
-----------------------------------------------------------------------------------
configure old replica as source
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
-----------------------------------------------------------------------------------
configure old source as replica
-----------------------------------------------------------------------------------
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1421-86089695-4-cluster3" modified.
-----------------------------------------------------------------------------------
Write data to replica cluster
-----------------------------------------------------------------------------------
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
pod/pxc-client-65d95667fc-f5pf6 condition met
pxc-client-65d95667fc-f5pf6.Ok
-----------------------------------------------------------------------------------
Check replication works between replica -> source
-----------------------------------------------------------------------------------
Context "gke_cloud-dev-112233_us-central1-a_jen-pxc-1421-86089695-4-cluster3" modified.
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
pod/pxc-client-65d95667fc-92xwk condition met
pxc-client-65d95667fc-92xwk.Ok
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
+ kubectl patch pxc -n cross-site-15481 cross-site-source --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-source patched
+ kubectl patch pxc -n cross-site-replica-25862 cross-site-replica --type=merge -p '{"metadata":{"finalizers":[]}}'
perconaxtradbcluster.pxc.percona.com/cross-site-replica patched
perconaxtradbcluster.pxc.percona.com "cross-site-source" deleted
perconaxtradbcluster.pxc.percona.com "cross-site-replica" deleted
perconaxtradbclusterbackup.pxc.percona.com "backup-minio-source" deleted
perconaxtradbclusterbackup.pxc.percona.com "backup-minio-replica" deleted
perconaxtradbclusterrestore.pxc.percona.com "backup-minio" deleted
perconaxtradbclusterrestore.pxc.percona.com "backup-minio" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "percona-xtradbcluster-webhook" deleted
-----------------------------------------------------------------------------------
destroy cluster/operator and all other resources
-----------------------------------------------------------------------------------
No resources found
+ kubectl patch pxc -n sh --type=merge -p '{"metadata":{"finalizers":[]}}'
error: resource(s) were provided, but no name was specified
No resources found
No resources found
No resources found
Error from server (NotFound): validatingwebhookconfigurations.admissionregistration.k8s.io "percona-xtradbcluster-webhook" not found
-----------------------------------------------------------------------------------
test passed
-----------------------------------------------------------------------------------
namespace "pxc-operator" force deleted
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): namespaces "pxc-operator" not found