The Evolution of Kubernetes DNS Architecture (kube-dns, CoreDNS)

(This article consists of my notes from the book *The Definitive Guide to Kubernetes Networking*. Since I am just learning Kubernetes, my self-study method is to type the material out by hand and add some of my own understanding, so much of the content closely matches the book. If this article infringes on the book's copyright, please let me know and I will delete it. If you quote or reprint this article, please indicate the source. Thank you.)



The Kubernetes DNS service currently has two implementations: kube-dns and CoreDNS.

1. How kube-dns works

The kube-dns architecture has gone through two relatively large revisions. Before Kubernetes 1.3, it used the etcd + kube2sky + SkyDNS architecture; from Kubernetes 1.3 onward, it uses the kubedns + dnsmasq + exechealthz architecture. Both architectures build on the capabilities of SkyDNS, which supports forward lookups (A records), service lookups (SRV records), and reverse IP address lookups (PTR records).

etcd + kube2sky + SkyDNS
etcd + kube2sky + SkyDNS is the first generation of the kube-dns architecture. It includes three important components:

  • etcd: stores all the data needed for DNS lookups;
  • kube2sky: watches the Kubernetes API Server for Service and Endpoint changes, then synchronizes that state into kube-dns's own etcd;
  • SkyDNS: listens on port 53 and serves DNS queries based on the data in etcd.

The architecture is shown in the figure below.
(figure omitted: the image could not be uploaded due to size limits)

SkyDNS is configured with etcd as its storage backend. When SkyDNS receives a DNS request from within the Kubernetes cluster, it reads the data from etcd, packages it, and returns it as the DNS response. kube2sky updates the data in etcd by watching Services and Endpoints.
Assuming the etcd container ID is 0fb60dxxx, let's look at how a Service's Cluster IP is stored in etcd:

docker exec -it 0fb60d etcdctl get /skydns/local/kube/svc/default/mysql-service
{"host":"10.254.162.44","priority":10,"weight":10,"ttl":30,"targetstrip":0}

As shown above, the Service mysql-service in the default namespace maps to the Cluster IP 10.254.162.44.
SkyDNS's basic configuration is also stored in etcd, for example the kube-dns IP address and the domain name suffix:

docker exec -it 0fb60d etcdctl get /skydns/config
{"dns-addr":"10.254.10.2:53","ttl":30,"domain":"cluster.local"}

It is not hard to see that SkyDNS is the component that actually provides name resolution, kube2sky is the bridge between Kubernetes and SkyDNS, and etcd stores only the domain-name data for the relevant Kubernetes API objects. This version of kube-dns deployed the SkyDNS process directly to provide the domain name resolution service.
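The etcd key in the `etcdctl` example above follows SkyDNS's path convention: the DNS name is split into labels, the labels are reversed, and the result is placed under the `/skydns` prefix. A minimal sketch of that mapping (the function name and layout are my own illustration, not SkyDNS source code):

```go
package main

import (
	"fmt"
	"strings"
)

// skydnsKey converts a DNS name into the etcd key path used by SkyDNS:
// the labels are reversed and joined under the /skydns prefix.
func skydnsKey(name string) string {
	labels := strings.Split(strings.TrimSuffix(name, "."), ".")
	// Reverse the labels: mysql-service.default.svc.kube.local
	// becomes local/kube/svc/default/mysql-service.
	for i, j := 0, len(labels)-1; i < j; i, j = i+1, j-1 {
		labels[i], labels[j] = labels[j], labels[i]
	}
	return "/skydns/" + strings.Join(labels, "/")
}

func main() {
	fmt.Println(skydnsKey("mysql-service.default.svc.kube.local."))
	// → /skydns/local/kube/svc/default/mysql-service
}
```

This reproduces the key queried in the `etcdctl get` example above.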

kubedns + dnsmasq + exechealthz
The newer, evolved kube-dns architecture is shown in the figure below.
(figure omitted: the image could not be uploaded due to size limits)

This version of kube-dns comprises three core components:

  • kubedns: watches the Kubernetes API Server for Service and Endpoint changes and calls SkyDNS's Go library, maintaining the DNS records in memory. kubedns acts as the upstream for dnsmasq, supplying data when the dnsmasq cache misses.
  • dnsmasq: a DNS configuration tool that listens on port 53 and provides the DNS lookup service for the cluster. dnsmasq provides a DNS cache, which reduces query pressure on kubedns and improves overall DNS resolution performance.
  • exechealthz: performs health checks on kubedns and dnsmasq, and exposes an HTTP /healthz endpoint for querying kube-dns's health.

The differences between this version and the previous version of kube-dns are:

  • The Service and Endpoint objects watched from the Kubernetes API Server are no longer stored in etcd but cached in memory, which both improves query performance and eliminates the workload of maintaining an etcd store;
  • A dnsmasq container was introduced to receive DNS requests from the Kubernetes cluster, the goal being to use dnsmasq's caching module to improve resolution performance;
  • SkyDNS is no longer deployed directly; instead its Go library is called, which is equivalent to merging the earlier SkyDNS and kube2sky into a single process;
  • Health checks were added.

At runtime, dnsmasq reserves a buffer in memory (1 GB by default) to hold the most recently used DNS query records. If a record cannot be found in this cache, dnsmasq queries kubedns and caches the result.
Note: dnsmasq is a small program written in C, and it has had a "minor" memory-leak problem.
To sum up, regardless of the architecture, kube-dns is essentially a monitor of Kubernetes API objects plus SkyDNS.

2. The rise of CoreDNS

CoreDNS, a service-discovery project hosted by the CNCF, integrates natively with Kubernetes. Its goal is to become the reference DNS server and service-discovery solution for cloud native. Notably, CoreDNS has followed a path similar to Traefik's, effectively delivering the final blow to SkyDNS.
Starting from Kubernetes 1.12, CoreDNS became the default DNS server for Kubernetes, but kubeadm supported installing CoreDNS even earlier: as of Kubernetes 1.9, a kubeadm-created cluster can install CoreDNS directly with the following flag:

kubeadm init --feature-gates=CoreDNS=true

Below we explain CoreDNS's architectural design and basic usage in detail.
Functionally, CoreDNS is more like a general-purpose DNS solution that greatly extends its functionality through a plugin model, adapting to different application scenarios.
CoreDNS has the following three characteristics:

  • Plugin-oriented (Plugins). Built on the Caddy server framework, CoreDNS implements a chain-of-responsibility plugin architecture, abstracting a large number of application-level features as plugins exposed to the user (for example, Kubernetes DNS service discovery and Prometheus monitoring). CoreDNS strings the different plugins together in a preconfigured order, and the plugins execute their logic in that order. At compile time, the user selects the required plugins and compiles them into the final executable, which makes execution more efficient. CoreDNS is written in Go, so at the code level each CoreDNS plugin component simply implements an interface that CoreDNS defines. Third-party developers only need to follow the CoreDNS Plugin API to write custom plugins that integrate easily into CoreDNS.
  • Simplified configuration. CoreDNS introduces a more expressive DSL: the Corefile configuration format (also derived from the open-source Caddy framework).
  • An integrated solution. Unlike kube-dns's "three-in-one" architecture, CoreDNS compiles into a single executable with built-in caching, backend storage management, and health checking, requiring no third-party components for auxiliary functions. This makes deployment simpler and memory management safer.

Getting to know Corefile

Corefile is CoreDNS's configuration file (derived from the Caddy framework's Caddyfile). It defines:

  • which protocol and port each DNS server listens on (you can define multiple servers listening on different ports);
  • which zones each DNS server is authoritative for;
  • which plugins each DNS server loads.

Typically, a Corefile has the following format:

ZONE:[PORT] {
    [PLUGIN] ...
}
  • ZONE: defines the zone this DNS server is responsible for; PORT is optional and defaults to 53;
  • PLUGIN: defines the plugins the DNS server loads; each plugin can take multiple parameters. For example:
. {
    chaos CoreDNS-001
}

The configuration above expresses: the DNS server is responsible for resolving the root zone ".", and it loads the chaos plugin with the parameter CoreDNS-001.
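As a slightly fuller illustration, a server block can combine a non-default port with several plugins, each taking its own parameters (a hypothetical Corefile fragment; the zone and upstream addresses are placeholders, not from the original article):

. example.org:1053 {
.     log
.     errors
.     forward . 8.8.8.8 8.8.4.4
. }

Here the server listens on port 1053 for the zone example.org, logs and reports errors, and forwards everything it cannot answer to the two listed upstream resolvers.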

1) Defining a DNS server
The simplest DNS server configuration file is:

. {}

This defines a DNS server that listens on port 53 and loads no plugins. If you define additional DNS servers at this point, you need to make sure their listening ports do not conflict. If you add zones to an existing DNS server, you must make sure the zones do not conflict either. For example:

. {}
.:54 {}

As shown above, another DNS server listens on port 54 and is also responsible for resolving the root zone ".".
Another example:

example.org {
    whoami
}
org {
    whoami
}

Here the same DNS server is responsible for resolving different zones, each with its own plugin chain.

2) Defining a reverse zone
Like other DNS servers, Corefile can also define reverse zones (for resolving IP addresses back to domain names):

0.0.10.in-addr.arpa {
    whoami
}

Or simplified version:

10.0.0.0/24 {
    whoami
}
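The simplified CIDR form works because CoreDNS expands "10.0.0.0/24" into the corresponding in-addr.arpa zone shown above: the network octets are reversed. A sketch of that expansion for the /24 case (my own illustration restricted to /24 networks, not CoreDNS's actual parsing code):

```go
package main

import (
	"fmt"
	"strings"
)

// reverseZone builds the in-addr.arpa zone name for an IPv4 /24
// network by reversing the first three octets, the same expansion
// CoreDNS performs when it reads "10.0.0.0/24" in a Corefile.
func reverseZone(cidr string) string {
	ip := strings.Split(strings.Split(cidr, "/")[0], ".")
	return fmt.Sprintf("%s.%s.%s.in-addr.arpa", ip[2], ip[1], ip[0])
}

func main() {
	fmt.Println(reverseZone("10.0.0.0/24")) // → 0.0.10.in-addr.arpa
}
```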

3) Using different communication protocols
Besides the DNS protocol, CoreDNS also supports TLS and gRPC, i.e., the DNS-over-TLS and DNS-over-gRPC modes. For example:

tls://example.org:1443 {
    #...
}

How plugins work

When CoreDNS starts, it starts one or more DNS servers based on the configuration file, each with its own plugin chain. When a new DNS request arrives, it goes through the following three steps:
(1) If the current DNS server serves multiple zones, the zone is selected using the greedy best-match principle;
(2) Once a matching DNS server is found, the request traverses the plugin chain in the order defined in plugin.cfg;
(3) Each plugin decides whether to process the current request, with the following possibilities:

  • The plugin processes the request. The plugin generates a response and returns it to the client; the request ends here and no further plugin is called (for example, the whoami plugin).
  • The plugin does not process the request. The next plugin is called directly. If the last plugin in the chain behaves this way, the server returns a SERVFAIL response.
  • The plugin processes the request in fallthrough form. If, during processing, the request may jump to the next plugin, the process is called fallthrough, and the fallthrough keyword decides whether to allow it. For example, the hosts plugin first tries to look up the name in /etc/hosts; if it finds nothing, it calls the next plugin.
  • The plugin processes the request while adding a hint: the plugin processes the request, attaches some information (a hint) to its response, and passes it to the next plugin to continue processing. This extra information becomes part of the final response to the client (for example, the metric plugin).
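The first three outcomes above can be sketched as a chain of handler functions (a simplified illustration of the chain-of-responsibility idea only; CoreDNS's real plugin interface is richer, and all names here are invented):

```go
package main

import "fmt"

// A plugin either answers the query itself or passes it to the next
// plugin in the chain, mirroring the outcomes described above.
type plugin func(name string, next func(string) string) string

// chain wires plugins together in order; if the last plugin declines
// to answer, the server responds with SERVFAIL.
func chain(plugins []plugin) func(string) string {
	handler := func(string) string { return "SERVFAIL" }
	for i := len(plugins) - 1; i >= 0; i-- {
		p, next := plugins[i], handler
		handler = func(name string) string { return p(name, next) }
	}
	return handler
}

func main() {
	// A hosts-like plugin: answers names it knows, falls through otherwise.
	hosts := func(name string, next func(string) string) string {
		if name == "db.local" {
			return "10.0.0.7" // handled: the chain stops here
		}
		return next(name) // fallthrough to the next plugin
	}
	// A whoami-like plugin: always answers.
	whoami := func(name string, next func(string) string) string {
		return "127.0.0.1"
	}
	serve := chain([]plugin{hosts, whoami})
	fmt.Println(serve("db.local"))  // answered by the hosts-like plugin
	fmt.Println(serve("foo.local")) // falls through to the whoami-like plugin
}
```

If every plugin in the chain declines, `serve` returns "SERVFAIL", matching the second outcome above.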

CoreDNS request-processing workflow

Below we use a practical Corefile example to explain in detail CoreDNS's workflow for processing a DNS request. The Corefile is as follows:

coredns.io:5300 {
    file /etc/coredns/zones/coredns.io.db
}

example.io:53 {
    errors
    log
    file /etc/coredns/zones/example.io.db
}

example.net:53 {
    file /etc/coredns/zones/example.net.db
}

.:53 {
    errors
    log
    health
    rewrite name foo.example.com foo.default.svc.cluster.local
}

From the configuration file it is easy to see that we define two DNS servers (although there are four configuration blocks), listening on ports 5300 and 53 respectively. The Corefile above translates into the processing logic shown in the figure:
(figure omitted: the image could not be uploaded due to size limits)

Each incoming request is handled by the matching DNS server, which executes the plugins it has loaded in the order defined in plugin.cfg.
Note that although the .:53 block includes the health plugin, it does not appear in the logic diagram above, because this plugin does not participate in the request-handling logic (i.e., it is not inserted into the plugin chain); it only modifies the DNS server's configuration. We can generally divide plugins into two categories:

  • Normal plugins: participate in the request-handling logic and are inserted into the plugin chain;
  • Other plugins: do not participate in the request-handling logic and do not appear in the plugin chain; they only modify the DNS server's configuration, for example the health and tls plugins.


Origin blog.csdn.net/WuYuChen20/article/details/104517206