amazon web services - Split Horizon DNS in local Minikube cluster and AWS

Initially, I deployed my frontend web application and all the backend APIs in AWS ECS. Each of the backend APIs has a Route53 record, and the frontend is connected to these APIs via the .env file. Now I would like to migrate from ECS to EKS, and I am trying to deploy all these applications in a local Minikube cluster. I would like to keep the .env in my frontend application unchanged (using the same URLs for all the environment variables). The application should first look for the backend API inside the local cluster through service discovery; if the backend API doesn't exist in the cluster, it should connect to the external service, which is the API deployed in ECS. In short: first local (Minikube cluster), then external (AWS). How can I implement this in Kubernetes?

http://backendapi.learning.com --> backend API deployed in the Pod --> if not present --> backend API deployed in ECS

.env

BACKEND_API_URL=http://backendapi.learning.com

One example from the code in which the frontend calls the backend API:

// Call GET /ping on the backend API and return the parsed JSON response
export const ping = async () => {
    const res = await fetch(`${process.env.BACKEND_API_URL}/ping`);
    const json = await res.json();
    return json;
};
question from: https://stackoverflow.com/questions/65859804/split-horizon-dns-in-local-minikube-cluster-and-aws


1 Answer


Assuming that your setup is:

  • Based on a microservices architecture.
  • Applications deployed in the Kubernetes cluster (frontend and backend) are Dockerized.
  • Applications are capable of running on top of Kubernetes.
  • etc.

You can configure your Kubernetes cluster (minikube instance) to relay your requests to different locations by using Services.


Service

In Kubernetes terminology "Service" is an abstract way to expose an application running on a set of Pods as a network service.

Some of the Service types are the following:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
  • ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.

https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
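
For illustration only, an ExternalName Service could map an in-cluster name to the backend that stays in ECS. This is a minimal sketch of the type described above (the Service name and the external hostname are borrowed from your question, not a definitive setup):

  • externalname-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backendapi # hypothetical in-cluster name for the ECS-hosted backend
spec:
  type: ExternalName
  # DNS queries for backendapi.default.svc.cluster.local will return a CNAME
  # pointing at the Route53 record of the backend still running in ECS
  externalName: backendapi.learning.com

Note, however, that an ExternalName Service on its own always points outside the cluster; the setup below is what lets the in-cluster backend take precedence.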

You can use a Headless Service with selectors and dnsConfig (in the Deployment manifest) to achieve the setup referenced in your question.

Let me explain more:


Example

Let's assume that you have a backend:

  • nginx-one - located both inside the cluster and outside of it (e.g. in ECS)

Your frontend manifest, in its most basic form, should look like the following:

  • deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command: 
        - sleep
        - "infinity" 
      dnsConfig: # <--- IMPORTANT 
        searches: 
        - DOMAIN.NAME 

Taking a specific look at:

      dnsConfig: # <--- IMPORTANT 
        searches: 
        - DOMAIN.NAME 

Dissecting the above part:

  • dnsConfig - the dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field has to be specified.

  • searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
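
For illustration only: with the default ClusterFirst dnsPolicy plus the dnsConfig above, the Pod's /etc/resolv.conf would look roughly like the following (10.96.0.10 is the typical kube-dns ClusterIP in minikube, and DOMAIN.NAME stands for your external zone, e.g. learning.com from your question):

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local DOMAIN.NAME
options ndots:5

Thanks to the extra search domain, a query for nginx-one is first tried as nginx-one.default.svc.cluster.local and, if that yields nothing, eventually as nginx-one.DOMAIN.NAME, which is resolved outside of the cluster.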

As for the Services for your backends:

  • service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-one
spec:
  clusterIP: None # <-- IMPORTANT
  selector:
    app: nginx-one
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80

The above Service will tell your frontend that one of your backends (nginx) is available through a Headless Service (why it's Headless will come in handy later!). By default you could communicate with it by:

  • service-name (nginx-one)
  • service-name.namespace.svc.cluster.local (nginx-one.default.svc.cluster.local) - only locally

Connecting to your backend

Assuming that you are sending the request with curl (for simplicity) from the frontend to the backend, the DNS resolution will follow a specific order:

  • check the DNS record inside the cluster
  • check the DNS record specified in dnsConfig

The specifics of connecting to your backend are the following:

  • If the Pod with your backend is available in the cluster, the DNS resolution will point to the Pod's IP (not a ClusterIP).
  • If the backend Pod is not available in the cluster for whatever reason, the DNS resolution will first check the internal records and then opt to use DOMAIN.NAME from the dnsConfig (outside of minikube).
  • If there is no Service associated with the specific backend (nginx-one), the DNS resolution will use DOMAIN.NAME from the dnsConfig, searching for it outside of the cluster.

A side note!

The Headless Service with a selector comes into play here as its intention is to point directly to the Pod's IP and not a ClusterIP (which exists as long as the Service exists). If you used a "normal" Service, you would always try to communicate with the ClusterIP even if there are no Pods matching the selector. By using a headless one, if there is no Pod, the DNS resolution will look further down the line (external sources).




EDIT:

You could also take a look at alternative options:

Alternative option 1:

  • Use the rewrite plugin in CoreDNS to rewrite DNS queries for backendapi.learning.com to backendapi.default.svc.cluster.local (a sketch follows below).
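
A minimal sketch of what that could look like in minikube's coredns ConfigMap (kube-system namespace), assuming the in-cluster Service is named backendapi in the default namespace; the surrounding Corefile shown here mirrors a typical default and may differ in your cluster:

  • coredns-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # rewrite the external name to the in-cluster Service before the
        # kubernetes plugin handles the query
        rewrite name backendapi.learning.com backendapi.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }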

Alternative option 2:

You can also use ConfigMaps to reuse your .env files, as sketched below.
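
A hedged sketch of that approach, using the .env contents from your question and a hypothetical ConfigMap name frontend-env; the envFrom fragment would go into the frontend Deployment's container spec:

  • frontend-env-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-env # hypothetical name
data:
  BACKEND_API_URL: "http://backendapi.learning.com"

  • fragment of the frontend Deployment (under spec.template.spec.containers):
        envFrom:
        - configMapRef:
            name: frontend-env # exposes BACKEND_API_URL as an environment variable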

