Merge branch 'master' of code.ungleich.ch:ungleich-public/ungleich-staticcms

This commit is contained in:
Nico Schottelius 2021-06-14 11:09:15 +02:00
commit 301f750c73
3 changed files with 428 additions and 0 deletions

Binary file not shown (image, 88 KiB).


@@ -0,0 +1,227 @@
title: Making kubernetes kube-dns publicly reachable
---
pub_date: 2021-06-13
---
author: ungleich
---
twitter_handle: ungleich
---
_hidden: no
---
_discoverable: yes
---
abstract:
Looking into IPv6 only DNS provided by kubernetes
---
body:
## Introduction
If you have seen our
[article about running kubernetes
Ingress-less](/u/blog/kubernetes-without-ingress/), you are aware that
we are pushing IPv6 only kubernetes clusters at ungleich.
Today, we are looking at making the "internal" kube-dns service world
reachable using IPv6 and global DNS servers.
## The kubernetes DNS service
If you have a look at your typical k8s cluster, you will notice that
you usually have two coredns pods running:
```
% kubectl -n kube-system get pods -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-gz5c7   1/1     Running   0          6d
coredns-558bd4d5db-hrzhz   1/1     Running   0          6d
```
These pods are usually exposed via the **kube-dns** service:
```
% kubectl -n kube-system get svc -l k8s-app=kube-dns
NAME       TYPE        CLUSTER-IP           EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   2a0a:e5c0:13:e2::a   <none>        53/UDP,53/TCP,9153/TCP   6d1h
```
As you can see, the kube-dns service is running on a publicly
reachable IPv6 address.
## IPv6 only DNS
IPv6 only DNS servers have one drawback: they cannot be reached
during DNS recursion if the recursing resolver is IPv4 only.
At [ungleich we run a variety of
services](https://redmine.ungleich.ch/projects/open-infrastructure/wiki)
to make IPv6 only services usable in the real world. In the case of
DNS, we are using **DNS forwarders**. They act similarly to HTTP
proxies, but for DNS.
So on our main DNS servers, dns1.ungleich.ch, dns2.ungleich.ch
and dns3.ungleich.ch, we have added the following configuration:
```
zone "k8s.place7.ungleich.ch" {
    type forward;
    forward only;
    forwarders { 2a0a:e5c0:13:e2::a; };
};
```
This tells the DNS servers to forward DNS queries that come in for
k8s.place7.ungleich.ch to **2a0a:e5c0:13:e2::a**.
Additionally we have added **DNS delegation** in the
place7.ungleich.ch zone:
```
k8s NS dns1.ungleich.ch.
k8s NS dns2.ungleich.ch.
k8s NS dns3.ungleich.ch.
```
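To verify the delegation, one can ask for the NS records of the
subdomain; CoreDNS answers for its own zone with the kube-dns service
name as nameserver, matching the authority section of the dig output
further below:
```
% dig +short ns k8s.place7.ungleich.ch
kube-dns.kube-system.svc.k8s.place7.ungleich.ch.
```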
## Using the kubernetes DNS service in the wild
With this configuration, we can now access IPv6 only
kubernetes services directly from the Internet. Let's first discover
the kube-dns service itself:
```
% dig kube-dns.kube-system.svc.k8s.place7.ungleich.ch. aaaa
; <<>> DiG 9.16.16 <<>> kube-dns.kube-system.svc.k8s.place7.ungleich.ch. aaaa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23274
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: f61925944f5218c9ac21e43960c64f254792e60f2b10f3f5 (good)
;; QUESTION SECTION:
;kube-dns.kube-system.svc.k8s.place7.ungleich.ch. IN AAAA
;; ANSWER SECTION:
kube-dns.kube-system.svc.k8s.place7.ungleich.ch. 27 IN AAAA 2a0a:e5c0:13:e2::a
;; AUTHORITY SECTION:
k8s.place7.ungleich.ch. 13 IN NS kube-dns.kube-system.svc.k8s.place7.ungleich.ch.
```
As you can see, the **kube-dns** service in the **kube-system**
namespace resolves to 2a0a:e5c0:13:e2::a, which is exactly what we
have configured.
At the moment, there is also an etherpad test service
named "ungleich-etherpad" running:
```
% kubectl get svc -l app=ungleichetherpad
NAME                TYPE        CLUSTER-IP              EXTERNAL-IP   PORT(S)    AGE
ungleich-etherpad   ClusterIP   2a0a:e5c0:13:e2::b7db   <none>        9001/TCP   3d19h
```
Let's first verify that it resolves:
```
% dig +short ungleich-etherpad.default.svc.k8s.place7.ungleich.ch aaaa
2a0a:e5c0:13:e2::b7db
```
And if that works, well, then we should also be able to access the
service itself!
```
% curl -I http://ungleich-etherpad.default.svc.k8s.place7.ungleich.ch:9001/
HTTP/1.1 200 OK
X-Powered-By: Express
X-UA-Compatible: IE=Edge,chrome=1
Referrer-Policy: same-origin
Content-Type: text/html; charset=utf-8
Content-Length: 6039
ETag: W/"1797-Dq3+mr7XP0PQshikMNRpm5RSkGA"
Set-Cookie: express_sid=s%3AZGKdDe3FN1v5UPcS-7rsZW7CeloPrQ7p.VaL1V0M4780TBm8bT9hPVQMWPX5Lcte%2BzotO9Lsejlk; Path=/; HttpOnly; SameSite=Lax
Date: Sun, 13 Jun 2021 18:36:23 GMT
Connection: keep-alive
Keep-Alive: timeout=5
```
(attention, this is a test service and might no longer be running
when you read this article)
## IPv6 vs. IPv4
Could we have achieved the same with IPv4? The answer here is "maybe":
If the kubernetes service is reachable from globally reachable
nameservers via IPv4, then the answer is yes. This could be done via
public IPv4 addresses in the kubernetes cluster, via tunnels, VPNs,
etc.
However, generally speaking, the DNS service of a
kubernetes cluster running on RFC1918 IP addresses is probably not
reachable from globally reachable DNS servers by default.
For IPv6 the case is a bit different: we are using globally reachable
IPv6 addresses in our k8s clusters, so they can potentially be
reached without the need for any tunnel whatsoever. Firewalling
and network policies can obviously prevent access, but if the IP
addresses are properly routed, they will be accessible from the public
Internet.
And this makes things much easier for DNS servers that also
have IPv6 connectivity.
The following picture shows the practical difference between the two
approaches:
![](/u/image/k8s-v6-v4-dns.png)
## Does this make sense?
That clearly depends on your use case. If you want your service DNS
records to be publicly accessible, then the clear answer is yes.
If your cluster services are intended to be internal only
(see [previous blog post](/u/blog/kubernetes-without-ingress/)), then
exposing the DNS service to the world might not be the best option.
## Note on security
CoreDNS inside kubernetes is by default configured to allow resolving
for *any* client that can reach it. Thus if you make your kube-dns
service world reachable, you also turn it into an open resolver.
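As an illustration: with the service world reachable, anyone on the
IPv6 Internet can resolve arbitrary external names through it, since
coredns forwards unknown zones to /etc/resolv.conf (hypothetical
output):
```
% dig @2a0a:e5c0:13:e2::a +short example.com aaaa
2606:2800:220:1:248:1893:25c8:1946
```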
At the time of writing this blog article, the following coredns
configuration **does NOT** correctly block requests:
```
Corefile: |
  .:53 {
      acl k8s.place7.ungleich.ch {
          allow net ::/0
      }
      acl . {
          allow net 2a0a:e5c0:13::/48
          block
      }
      forward . /etc/resolv.conf {
          max_concurrent 1000
      }
      ...
```
Until this is solved, we recommend placing a firewall in front of your
public kube-dns service that only allows requests from the forwarding
DNS servers.
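A minimal sketch of such a rule set, assuming nftables on the router
in front of the cluster; the forwarder addresses are placeholders for
dns1/dns2/dns3, the kube-dns address is the one from above:
```
table ip6 kube_dns_filter {
    chain forward {
        type filter hook forward priority 0; policy accept;

        # allow only our forwarding DNS servers to reach kube-dns
        # (2001:db8::53:1-3 are placeholder addresses)
        ip6 daddr 2a0a:e5c0:13:e2::a meta l4proto { tcp, udp } th dport 53 ip6 saddr { 2001:db8::53:1, 2001:db8::53:2, 2001:db8::53:3 } accept

        # everyone else: no DNS to the kube-dns service
        ip6 daddr 2a0a:e5c0:13:e2::a meta l4proto { tcp, udp } th dport 53 drop
    }
}
```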
## More of this
We are discussing
kubernetes and IPv6 related topics in
**the #hacking:ungleich.ch Matrix channel**
([you can sign up here if you don't have an
account](https://chat.with.ungleich.ch)) and will post more about our
k8s journey in this blog. Stay tuned!


@@ -0,0 +1,201 @@
title: Building Ingress-less Kubernetes Clusters
---
pub_date: 2021-06-09
---
author: ungleich
---
twitter_handle: ungleich
---
_hidden: no
---
_discoverable: yes
---
abstract:
---
body:
## Introduction
On [our journey to build and define IPv6 only kubernetes
clusters](https://www.nico.schottelius.org/blog/k8s-ipv6-only-cluster/)
we came across some principles that seem awkward in the IPv6 only
world. Let us today have a look at the *LoadBalancer* and *Ingress*
concepts.
## Ingress
Let's have a look at the [Ingress
definition](https://kubernetes.io/docs/concepts/services-networking/ingress/)
from the kubernetes website:
```
Ingress exposes HTTP and HTTPS routes from outside the cluster to
services within the cluster. Traffic routing is controlled by rules
defined on the Ingress resource.
```
So the Ingress basically routes from the outside to the inside. But in
the IPv6 world, services are already publicly reachable. It just
depends on your network policy.
### Update 2021-06-13: Ingress vs. Service
As some people pointed out (thanks a lot!), a public service is
**not the same** as an Ingress. An Ingress also has the ability to
route based on layer 7 information such as the path, the domain name, etc.
However, if all of the traffic from an Ingress points to a single
IPv6 HTTP/HTTPS Service, the IPv6 Service will effectively do the
same, with one hop less.
## Services
Let's have a look at what services in IPv6 only clusters look like:
```
% kubectl get svc
NAME            TYPE        CLUSTER-IP              EXTERNAL-IP   PORT(S)    AGE
etherpad        ClusterIP   2a0a:e5c0:13:e2::a94b   <none>        9001/TCP   19h
nginx-service   ClusterIP   2a0a:e5c0:13:e2::3607   <none>        80/TCP     43h
postgres        ClusterIP   2a0a:e5c0:13:e2::c9e0   <none>        5432/TCP   19h
...
```
All these services are world reachable, depending on your network
policy.
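If world reachability is not desired for a particular service, a
NetworkPolicy can restrict it again. A minimal sketch (the label and
the allowed prefix are placeholders):
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-limit-ingress
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 2a0a:e5c0:13::/48   # only this prefix may connect
      ports:
        - protocol: TCP
          port: 80
```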
## ServiceTypes
While we are looking at the k8s primitives, let's have a closer
look at the **Service**, specifically at 3 of the **ServiceTypes**
supported by k8s, including their definitions:
### ClusterIP
The k8s website says
```
Exposes the Service on a cluster-internal IP. Choosing this value
makes the Service only reachable from within the cluster. This is the
default ServiceType.
```
So in the context of IPv6, this sounds wrong. There is nothing that
makes a global IPv6 address "internal", besides possible network
policies. The concept probably stems from the strict distinction
between the RFC1918 space usually used inside k8s clusters and public
IPv4 space.
This distinction does not make a lot of sense in the IPv6 world though.
Seeing **services as public by default** makes much more sense,
and simplifies your clusters a lot.
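To illustrate: in an IPv6 only cluster, a completely ordinary
ClusterIP service like the following sketch already ends up on a
globally routable address, no special ServiceType required (the
selector is a placeholder):
```
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ipFamilies:
    - IPv6
  selector:
    app: nginx
  ports:
    - port: 80
      protocol: TCP
```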
### NodePort
Let's first have a look at the definition again:
```
Exposes the Service on each Node's IP at a static port (the
NodePort). A ClusterIP Service, to which the NodePort Service routes,
is automatically created. You'll be able to contact the NodePort
Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
```
Conceptually this can be utilised in the IPv6 only world similarly
to how it is in the IPv4 world. However, given that there are enough
addresses available with IPv6, this might not be such an interesting
ServiceType anymore.
### LoadBalancer
Before we have a look at this type, let's take some steps back
first to ...
## ... Load Balancing
There are a variety of possibilities to do load balancing: from simple
round robin, to ECMP based load balancing, to application aware,
potentially weighted load balancing.
So for load balancing, there is usually more than one solution, and
there is likely no one-size-fits-all.
With this said, let's have a look at the
**ServiceType LoadBalancer** definition:
```
Exposes the Service externally using a cloud provider's load
balancer. NodePort and ClusterIP Services, to which the external load
balancer routes, are automatically created.
```
So whatever the cloud provider offers can be used, and that is a good
thing. However, let's have a look at how you get load balancing for
free in IPv6 only clusters:
## Load Balancing in IPv6 only clusters
So what is the easiest way to get reliable load balancing in a network?
[ECMP (equal cost multi path)](https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing)
comes to mind right away. Given that
kubernetes nodes can BGP peer with the network (upstream routers or
switches), this basically gives load balancing to the world for free:
```
                [ The Internet ]
                       |
[ k8s-node-1 ]-----[ network ]-----[ k8s-node-n ]
                    [ ECMP ]
                       |
                [ k8s-node-2 ]
```
In the real world, on a bird based BGP upstream router,
this looks as follows:
```
[18:13:02] red.place7:~# birdc show route
BIRD 2.0.7 ready.
Table master6:
...
2a0a:e5c0:13:e2::/108 unicast [place7-server1 2021-06-07] * (100) [AS65534i]
        via 2a0a:e5c0:13:0:225:b3ff:fe20:3554 on eth0
                      unicast [place7-server4 2021-06-08] (100) [AS65534i]
        via 2a0a:e5c0:13:0:225:b3ff:fe20:3564 on eth0
                      unicast [place7-server2 2021-06-07] (100) [AS65534i]
        via 2a0a:e5c0:13:0:225:b3ff:fe20:38cc on eth0
                      unicast [place7-server3 2021-06-07] (100) [AS65534i]
        via 2a0a:e5c0:13:0:224:81ff:fee0:db7a on eth0
...
```
Which results in the following kernel route:
```
2a0a:e5c0:13:e2::/108 proto bird metric 32
        nexthop via 2a0a:e5c0:13:0:224:81ff:fee0:db7a dev eth0 weight 1
        nexthop via 2a0a:e5c0:13:0:225:b3ff:fe20:3554 dev eth0 weight 1
        nexthop via 2a0a:e5c0:13:0:225:b3ff:fe20:3564 dev eth0 weight 1
        nexthop via 2a0a:e5c0:13:0:225:b3ff:fe20:38cc dev eth0 weight 1 pref medium
```
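For completeness, a rough sketch of the node side in bird2 syntax; the
upstream address and its AS number are placeholders, and in practice
the route would point at the local CNI interface rather than being a
static unreachable route:
```
# announce the service CIDR from this k8s node
protocol static service_routes {
    ipv6;
    route 2a0a:e5c0:13:e2::/108 unreachable;
}

protocol bgp upstream {
    local as 65534;
    neighbor 2001:db8::1 as 64512;   # placeholder upstream router
    ipv6 {
        import none;
        export where proto = "service_routes";
    };
}
```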
## TL;DR
We know, a TL;DR at the end is not the right thing to do, but hey, we
are at ungleich, aren't we?
In a nutshell: with IPv6, the concepts of **Ingress**,
**Service** and the **LoadBalancer** ServiceType
need to be revised, as IPv6 allows direct access without having
to jump through hoops.
If you are interested in continuing the discussion,
we are there for you in
**the #hacking:ungleich.ch Matrix channel**
([you can sign up here if you don't have an
account](https://chat.with.ungleich.ch)).
Or if you are interested in an IPv6 only kubernetes cluster,
drop a mail to **support**-at-**ungleich.ch**.