I followed the official docs to deploy two Kubernetes clusters and install Istio in the Primary-Remote on different networks topology.
I found that the outbound endpoints in a pod's Envoy configuration are the pod IPs from both clusters, which seems to mean that cross-cluster requests do not flow into the other cluster through the east-west gateway address. That differs from the cross-network communication model described in the official docs.
Can anyone tell me why this happens and how to deal with it?
When installing Istio on cluster2, I made some changes to the IstioOperator to fit my resource constraints. In cluster1 I used the command from the docs to deploy istio-eastwestgateway and expose istiod through it (a sketch follows); cluster2's configuration is below.
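For reference, these are the cluster1 steps from the Istio 1.8 Primary-Remote (different networks) guide that I followed; treat the exact invocation as a sketch reconstructed from the docs rather than a verbatim transcript of my shell:

```sh
# Deploy the east-west gateway in cluster1 (the primary),
# using the generator script shipped with the Istio release
samples/multicluster/gen-eastwest-gateway.sh \
    --mesh mesh1 --cluster cluster1 --network network1 | \
    istioctl --context="${CTX_CLUSTER1}" install -y -f -

# Expose istiod through that gateway so cluster2's proxies can reach it
kubectl apply --context="${CTX_CLUSTER1}" -n istio-system \
    -f samples/multicluster/expose-istiod.yaml
```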
### cluster2's IstioOperator
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    global:
      imagePullPolicy: "IfNotPresent"
      proxy:
        resources:
          requests:
            cpu: 0m
            memory: 40Mi
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
      remotePilotAddress: 192.168.79.78
  components:
    ingressGateways:
    - name: istio-ingressgateway
      k8s:
        resources:
          requests:
            cpu: 0m
            memory: 40Mi
    pilot:
      k8s:
        env:
        - name: PILOT_TRACE_SAMPLING
          value: "100"
        resources:
          requests:
            cpu: 0m
            memory: 100Mi
```
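Assuming the manifest above is saved as cluster2.yaml (the file name is mine), the remote install and the remote-secret step per the docs look like the following; the remote secret is what lets cluster1's istiod watch endpoints in cluster2:

```sh
# Install the remote profile on cluster2
istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml

# Attach cluster2 to cluster1's control plane by granting
# cluster1 access to cluster2's API server
istioctl x create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    kubectl apply -f - --context="${CTX_CLUSTER1}"
```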
### cluster2's eastwest-gateway
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: eastwest
spec:
  profile: empty
  components:
    ingressGateways:
    - name: istio-eastwestgateway
      label:
        istio: eastwestgateway
        app: istio-eastwestgateway
        topology.istio.io/network: network2
      enabled: true
      k8s:
        resources:
          requests:
            cpu: "0m"
        env:
        # sni-dnat adds the clusters required for AUTO_PASSTHROUGH mode
        - name: ISTIO_META_ROUTER_MODE
          value: "sni-dnat"
        # traffic through this gateway should be routed inside the network
        - name: ISTIO_META_REQUESTED_NETWORK_VIEW
          value: network2
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: mtls
            port: 15443
            targetPort: 15443
          - name: tcp-istiod
            port: 15012
            targetPort: 15012
          - name: tcp-webhook
            port: 15017
            targetPort: 15017
  values:
    global:
      #jwtPolicy: first-party-jwt
      meshID: mesh1
      network: network2
      multiCluster:
        clusterName: cluster2
```
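The guide also says to expose user services on the east-west gateway of both clusters with an AUTO_PASSTHROUGH Gateway, and I applied that step as well; in the 1.8 release (samples/multicluster/expose-services.yaml) it looks like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        # Pass mTLS traffic through to the target service based on SNI
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"
```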
After installing both clusters, I followed the verification steps in the docs and found that pods in the two clusters could not communicate normally: it appears that requests that resolve to the local helloworld-v1 pod succeed, while those routed to cluster2's helloworld-v2 pod fail. The results of the cross-cluster requests are as follows:
```
[root@localhost k8s_ctx]# kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
>     "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
>     app=sleep -o jsonpath='{.items[0].metadata.name}')" \
>     -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    60  100    60    0     0    674      0 --:--:-- --:--:-- --:--:--   674
Hello version: v1, instance: helloworld-v1-5897696f47-5lsqp
[root@localhost k8s_ctx]# kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
> app=sleep -o jsonpath='{.items[0].metadata.name}')" -- curl helloworld.sample:5000/hello
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    84  100    84    0     0      8      0  0:00:10  0:00:10 --:--:--    22
upstream connect error or disconnect/reset before headers. reset reason: local reset
```
Here is my cluster and Istio version info:
```
[root@localhost istio-1.8.1]# istioctl version
client version: 1.8.1
control plane version: 1.8.1
data plane version: 1.8.1 (8 proxies)
[root@localhost istio-1.8.1]# istioctl pc endpoint sleep-854565cb79-77lt7 --port 5000
ENDPOINT                 STATUS      OUTLIER CHECK     CLUSTER
192.167.102.190:5000     HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
192.169.169.7:5000       HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
[root@localhost istio-1.8.1]# kubectl --context cluster1 get po -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP                NODE                    NOMINATED NODE   READINESS GATES
helloworld-v1-5897696f47-5lsqp   2/2     Running   0          73m   192.167.102.190   localhost.localdomain   <none>           <none>
sleep-854565cb79-77lt7           2/2     Running   0          73m   192.167.102.130   localhost.localdomain   <none>           <none>
[root@localhost istio-1.8.1]# kubectl --context cluster2 get po -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
helloworld-v2-7bbf4994d7-k577f   2/2     Running   0          73m   192.169.169.7    node-79-79   <none>           <none>
sleep-8f795f47d-74qgz            2/2     Running   0          73m   192.169.169.21   node-79-79   <none>           <none>
```
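Note that 192.169.169.7 above is the pod IP of helloworld-v2 in cluster2, so cluster1's sidecar is dialing the remote pod directly instead of cluster2's east-west gateway on port 15443. As I understand the docs, istiod should substitute the remote network's gateway address for remote endpoints, which only works if the network labels and the gateway's external IP are in place. A couple of checks I have thought of (standard kubectl commands; the suspected causes are my own guesses):

```sh
# istio-system in each cluster should carry the topology.istio.io/network label
kubectl --context="${CTX_CLUSTER1}" get namespace istio-system --show-labels
kubectl --context="${CTX_CLUSTER2}" get namespace istio-system --show-labels

# The east-west gateway service in cluster2 needs a reachable external IP;
# if EXTERNAL-IP is <pending>, istiod has no gateway address to substitute
kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
```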
question from:
https://stackoverflow.com/questions/65951684/i-install-primary-remote-on-different-networks-but-cross-clusters-pod-access-f