K3s Cluster with MetalLB
My setup #
My setup consists of one control-plane/master node and a single agent/worker node. Both run on Alpine:
Node Type | IP Address |
---|---|
Control Plane | 192.168.100.190 |
Worker | 192.168.100.191 |
The Control Plane #
By default, K3s ships with traefik as its ingress controller and servicelb as its load balancer. I'll disable both, and later install Nginx and MetalLB. The options I'll use to deploy the control plane:
curl -sfL https://get.k3s.io | sh -s - --cluster-init \
--write-kubeconfig-mode 644 \
--disable=traefik \
--disable=servicelb \
--token "@-seCr3T-S3V3R-T0k3n" \
--agent-token "S3cr3t-@ag3nT-t0k3N" \
--node-taint CriticalAddonsOnly=true:NoExecute \
--disable-cloud-controller
Option | Explanation |
---|---|
--cluster-init | Deploy a High Availability cluster with embedded etcd |
--write-kubeconfig-mode 644 | Set file permissions for the kubeconfig file to 644 so Rancher can consume it later |
--disable=traefik | Disable the default traefik ingress controller |
--disable=servicelb | Disable the default service load balancer |
--token | Set the server token used to join nodes as control planes |
--agent-token | Set the agent/worker token used to join worker/agent nodes |
--node-taint CriticalAddonsOnly=true:NoExecute | Ensure that only critical system pods are scheduled on the control plane node; any other pods will not be able to run on it |
--disable-cloud-controller | Disable the cloud-controller-manager and all related controllers. There is no cloud provider here; everything takes place on the local network. |
Having a separate --token (to join a master/control plane) and --agent-token (to join a worker/agent) is very important. If no token is provided, a single random password will be generated and used to join both types of nodes.
The tokens will be located at:
/var/lib/rancher/k3s/server/node-token # for control plane or master
/var/lib/rancher/k3s/server/agent-token # for agent or worker
/var/lib/rancher/k3s/server/token # for control plane or master
Note: node-token is a symlink to token.
More info about tokens -> here.
Some useful options (depending on your setup):
Option | Explanation |
---|---|
--write-kubeconfig | Create the kubeconfig in a "custom location". By convention, it should be inside $HOME/.kube/config. On k3s, it's located in /etc/rancher/k3s/k3s.yaml by default. |
--bind-address | Listen on a specific address if multiple NICs are present on your machine. |
--disable local-storage | Disable local-storage to use something else like Longhorn, but if you're learning Kubernetes like me, just leave it as is for now. |
Initializing the primary control plane:
k3s-control-plane:~# curl -sfL https://get.k3s.io | sh -s - --cluster-init \
--write-kubeconfig-mode 644 \
--disable=traefik \
--disable=servicelb \
--token "@-seCr3T-S3V3R-T0k3n" \
--agent-token "S3cr3t-@ag3nT-t0k3N" \
--node-taint CriticalAddonsOnly=true:NoExecute \
--disable-cloud-controller
[INFO] Finding release for channel stable
[INFO] Using v1.25.7+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.7+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.7+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/rancher/k3s/k3s.env
[INFO] openrc: Creating service file /etc/init.d/k3s
[INFO] openrc: Enabling k3s service for default runlevel
[INFO] openrc: Starting k3s
* Caching service dependencies ...
/lib/rc/sh/gendepends.sh: /etc/init.d/k3s: line 39: sourcex: not found [ ok ]
* Starting k3s ... [ ok ]
k3s-control-plane:~#
The sourcex: not found error is not a big deal; it's an Alpine/OpenRC issue, and you should be fine on systemd. To fix it, replace sourcex with a . (dot) in /etc/init.d/k3s and restart the service:
k3s-control-plane:~# service k3s restart
* Stopping k3s ... [ ok ]
* Starting k3s ... [ ok ]
Confirm that the control plane components are up and running (the API server can take a few seconds to come up, so the first query may fail):
k3s-control-plane:~# k3s kubectl get nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes)
k3s-control-plane:~#
k3s-control-plane:~#
k3s-control-plane:~# k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s-control-plane Ready control-plane,etcd,master 98s v1.25.7+k3s1
k3s-control-plane:~#
k3s-control-plane:~#
k3s-control-plane:~#
k3s-control-plane:~# k3s kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-597584b69b-44b2h 0/1 ContainerCreating 0 105s
kube-system local-path-provisioner-79f67d76f8-7dfnr 0/1 ContainerCreating 0 105s
kube-system metrics-server-5f9f776df5-rdv8m 0/1 ContainerCreating 0 105s
k3s-control-plane:~#
k3s-control-plane:~# k3s kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-597584b69b-44b2h 1/1 Running 0 2m31s
kube-system local-path-provisioner-79f67d76f8-7dfnr 1/1 Running 0 2m31s
kube-system metrics-server-5f9f776df5-rdv8m 1/1 Running 0 2m31s
Note: If you messed up, run k3s-uninstall.sh to clean up K3s and start from scratch. If the reinstallation causes errors, reboot the machine to wipe all traces of the cgroups created under /sys/fs/cgroup/unified/k3s.
Pods running on master node #
From the k3s docs: By default, server nodes will be schedulable and thus your workloads can get launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. The node-taint parameter will allow you to configure nodes with taints, for example --node-taint CriticalAddonsOnly=true:NoExecute.
It's not recommended to run user workloads on the master, and I expect things to break, like the master node running out of memory or storage. That's the point of a home lab.
During the initialization of the master, I only allowed critical pods to run. But you can also taint the master node so that no new pods get scheduled on it at all:
kubectl taint --overwrite node k3s-control-plane node-role.kubernetes.io/master=true:NoSchedule
or, at install time:
--node-taint master=true:NoSchedule
Then you'll need to add tolerations for the pods that should be running on the master. Pods such as coredns, local-path-provisioner, and metrics-server live in the kube-system namespace, which is reserved for system-level Kubernetes components, and these components are deployed on the master node. Here's a good place to start to understand what's happening in k3s. To learn more about taints and tolerations:
- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
- https://medium.com/kubernetes-tutorials/making-sense-of-taints-and-tolerations-in-kubernetes-446e75010f4e
- https://docs.openshift.com/container-platform/4.8/nodes/scheduling/nodes-scheduler-taints-tolerations.html
The CriticalAddonsOnly taint is used to reserve a node for critical system components: only pods that declare a matching toleration are allowed to run there. In k3s, the packaged system pods, including coredns, local-path-provisioner, and metrics-server, carry this toleration.
When you deploy your master node with the --node-taint CriticalAddonsOnly=true:NoExecute flag, you are effectively saying that only pods with a matching toleration may be scheduled on the master node. The NoExecute effect tells the Kubernetes scheduler to only place pods that tolerate the CriticalAddonsOnly taint; any other pods are repelled from the node, or evicted if they were already running there without the required toleration.
Since the coredns, local-path-provisioner, and metrics-server pods have a tolerations field with the CriticalAddonsOnly key and operator set to Exists, they are scheduled on the master node and run without any issues. You can confirm this with kubectl:
# the worker has no taint
k3s-control-plane:~# kubectl describe node k3s-worker-01 | grep Taints
Taints: <none>
# describe the master node
k3s-control-plane:~# kubectl describe node k3s-control-plane | grep Taints
Taints: CriticalAddonsOnly=true:NoExecute
# pod description
k3s-control-plane:~# kubectl describe pods coredns -n kube-system | grep -i tol
Tolerations: CriticalAddonsOnly op=Exists
k3s-control-plane:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-597584b69b-44b2h 1/1 Running 5 (39m ago) 4d
kube-system local-path-provisioner-79f67d76f8-7dfnr 1/1 Running 9 (38m ago) 4d
kube-system metrics-server-5f9f776df5-rdv8m 1/1 Running 7 (39m ago) 4d
k3s-control-plane:~# for pod in coredns local-path-provisioner metrics-server; \
> do kubectl describe pods $pod -n kube-system | grep -i tol; done
Tolerations: CriticalAddonsOnly op=Exists
Tolerations: CriticalAddonsOnly op=Exists
Tolerations: CriticalAddonsOnly op=Exists
k3s-control-plane:~#
As you can see, all three pods have the CriticalAddonsOnly toleration and are running on the master.
About kubelet #
Another important thing to note is how the coredns, local-path-provisioner, and metrics-server pods end up on the master node in k3s. They are often compared to static pods, but strictly speaking they are regular Deployment-managed pods: k3s ships their manifests on disk and applies them automatically at startup, and you can see the ReplicaSet hash in their pod names. True static pods, unlike pods managed by a Deployment/ReplicaSet, DaemonSet, or StatefulSet, are started and managed solely by the kubelet, which in k3s is embedded in the k3s and k3s-agent binary.
All pods in a Kubernetes cluster are managed by the kubelet in some way, but static pods have a distinct characteristic: they are created and managed directly by the kubelet on a specific node (such as a master or worker), independent of control plane components such as ReplicaSets or Deployments.
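A quick way to check how these pods are owned (a sketch; run it from the control plane). A pod created from a Deployment is owned by a ReplicaSet, whereas a true static pod's mirror shows the Node itself as owner:
kubectl get pods -n kube-system \
  -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].kind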
The kubelet is responsible for:
- Pod Management
- Node Health Monitoring
- Container Runtime Interaction
- Networking
- Volume Management
- Self-registration
Static pods are defined by manifest files in a designated directory on the node; the kubelet monitors that directory and ensures the pods described there are running. K3s uses a similar pattern for its packaged components: the server watches /var/lib/rancher/k3s/server/manifests and automatically applies anything placed there, although these become regular cluster resources rather than kubelet-level static pods:
k3s-control-plane:/var/lib/rancher/k3s/server/manifests# ls
coredns.yaml local-storage.yaml metrics-server rolebindings.yaml
On the other hand, non-static pods, also known as dynamically created pods, are managed by the control plane components. For example, ReplicaSets, Deployments, DaemonSets, StatefulSets, and other higher-level resources define and manage the desired state of these pods. The control plane instructs the kubelet to create and manage the pods based on the specified configurations.
Regarding pod management in general, while static pods have a direct association with the kubelet, the kubelet is responsible for managing all pods on a node, including dynamically created pods defined by higher-level control plane components like ReplicaSets, Deployments, and StatefulSets. It handles starting, stopping, monitoring, and resource management for every pod on the node: it interacts with the container runtime, sets up networking, manages volumes, and reports node status back to the control plane.
The pods running on a node are tracked by the kubelet under /var/lib/kubelet/pods (on both master and worker):
k3s-control-plane:/var/lib/kubelet/pods# ls | while read i; do cat "$i/etc-hosts" | tail -n1; done
10.42.0.35 metrics-server-5f9f776df5-rdv8m
10.42.0.36 local-path-provisioner-79f67d76f8-7dfnr
10.42.0.37 coredns-597584b69b-44b2h
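You can get the same node-to-pod mapping from the API server instead of poking around the kubelet's directories (a sketch):
kubectl get pods -A -o wide --field-selector spec.nodeName=k3s-control-plane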
Resources:
- https://stackoverflow.com/questions/65143885/how-does-the-kube-system-namesspace-get-created
- https://stackoverflow.com/questions/65161698/how-are-pods-in-kube-system-namespace-managed
Analogy about Taints and Tolerations #
Here are two analogies from ChatGPT:
Taints and tolerations in Kubernetes can be thought of as a bouncer and a VIP list at a nightclub.
In this analogy, the nodes in a Kubernetes cluster are like the nightclub, and the pods are like the guests who want to enter. Taints are like the bouncer at the door who checks the guests before allowing them inside. Taints can be applied to nodes to indicate that only certain types of pods are allowed to run on them. For example, a node might be tainted to only allow pods with a specific label or those that require certain resources.
Tolerations, on the other hand, are like the VIP list that some guests are on. If a pod has a toleration that matches a taint on a node, it will be allowed to run on that node even if the taint would normally prevent it. Tolerations allow certain pods to bypass the restrictions imposed by taints.
Just like a bouncer and a VIP list can help ensure that only the right people get into a nightclub, taints and tolerations in Kubernetes help ensure that pods are only run on nodes that are suitable for them, helping to optimize performance and resource usage.
Taints and tolerations in Kubernetes can be thought of as the security team and guest list at a luxury hotel charity event.
In this analogy, the nodes in a Kubernetes cluster are like the luxury hotel, and the pods are like the guests who want to enter. Taints are like the security team that checks the guests before allowing them into the hotel. Taints can be applied to nodes to indicate that only certain types of pods are allowed to run on them. For example, a node might be tainted to only allow pods with a specific label or those that require certain resources.
Tolerations, on the other hand, are like the guest list that some guests are on. If a pod has a toleration that matches a taint on a node, it will be allowed to run on that node even if the taint would normally prevent it. Tolerations allow certain pods to bypass the restrictions imposed by taints.
Just like a security team and a guest list can help ensure that only authorized guests enter a luxury hotel charity event, taints and tolerations in Kubernetes help ensure that pods are only run on nodes that are suitable for them, helping to optimize performance and resource usage.
About kubeconfig #
On the control plane, the config is located at /etc/rancher/k3s/k3s.yaml. The built-in k3s kubectl command was modified to look for the config file in that location. If you'll be using the upstream kubectl from k8s or other tools such as k9s, copy the config file to ~/.kube/config and export KUBECONFIG like so:
export KUBECONFIG="${HOME}/.kube/config"
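The copy itself could look like this on the control plane (a minimal sketch):
mkdir -p "${HOME}/.kube"
cp /etc/rancher/k3s/k3s.yaml "${HOME}/.kube/config"
chmod 600 "${HOME}/.kube/config"   # the file contains cluster credentials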
Note: You also have a kubectl in /usr/local/bin. It's a symlink to k3s.
Resources:
- https://github.com/k3s-io/k3s/issues/5558#issuecomment-1125480850
- https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
Join Agent or Worker Node #
Important: I will be using Ansible to perform all tasks from the control plane. This is a good place to start if you're comfortable with Ansible. I have also included the commands to run manually after the Ansible playbook below.
View the tokens that were generated:
k3s-control-plane:/var/lib/rancher/k3s/server# cat token
K10b3e8e7f461113a4126e48a84507864e3626373bb7de6ccff6d353264390cda23::server:@-seCr3T-S3V3R-T0k3n
k3s-control-plane:/var/lib/rancher/k3s/server# cat agent-token
K10b3e8e7f461113a4126e48a84507864e3626373bb7de6ccff6d353264390cda23::node:S3cr3t-@ag3nT-t0k3N
k3s-control-plane:/var/lib/rancher/k3s/server# ls -l token
-rw------- 1 root root 97 Mar 23 19:22 token
k3s-control-plane:/var/lib/rancher/k3s/server# ls -l agent-token
-rw------- 1 root root 94 Mar 23 19:22 agent-token
k3s-control-plane:/var/lib/rancher/k3s/server#
My /etc/hosts:
k3s-control-plane:~# cat /etc/hosts
127.0.0.1 localhost
::1 localhost
192.168.100.190 k3s-control-plane controlplane master
192.168.100.191 k3s-worker-01 worker-01
and /etc/ansible/hosts:
k3s-control-plane:~# cat /etc/ansible/hosts
[controlplane]
k3s-control-plane ansible_connection=local
[workers]
k3s-worker-01
[cluster:children]
controlplane
workers
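Before running the playbook, it's worth checking that Ansible can actually reach the worker (a sketch, assuming SSH access is already set up):
ansible workers -m ping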
My playbook:
---
- name: Joining Worker Node
  hosts: workers
  tasks:
    - name: Add k3s-control-plane to /etc/hosts
      lineinfile:
        path: /etc/hosts
        line: "192.168.100.190 k3s-control-plane"
        state: present
    - name: Copying join-worker.sh to worker node
      copy:
        src: /root/k3s-playbooks/join-worker.sh
        dest: /tmp/join-worker.sh
        mode: 0755
    - name: Running join-worker.sh
      command: sh /tmp/join-worker.sh
    - name: Replacing sourcex with a dot
      replace:
        path: /etc/init.d/k3s-agent
        regexp: sourcex
        replace: '.'
    - name: Restarting k3s-agent service
      service:
        name: k3s-agent
        state: restarted
    - name: Deleting join-worker.sh
      file:
        path: /tmp/join-worker.sh
        state: absent
The content of join-worker.sh:
curl -sfL https://get.k3s.io | K3S_URL=https://k3s-control-plane:6443 \
K3S_TOKEN=K10b3e8e7f461113a4126e48a84507864e3626373bb7de6ccff6d353264390cda23::node:S3cr3t-@ag3nT-t0k3N \
sh -s -
Here are the commands to run manually:
echo '192.168.100.190 k3s-control-plane' >> /etc/hosts
curl -sfL https://get.k3s.io | K3S_URL=https://k3s-control-plane:6443 \
K3S_TOKEN=K10b3e8e7f461113a4126e48a84507864e3626373bb7de6ccff6d353264390cda23::node:S3cr3t-@ag3nT-t0k3N \
sh -s -
sed -i.bak 's/sourcex/./g' /etc/init.d/k3s-agent
rm -f /etc/init.d/k3s-agent.bak
service k3s-agent restart
Joining the node:
k3s-control-plane:~/k3s-playbooks# ansible-playbook join-worker.yaml
PLAY [Joining Worker Node] *****************************
TASK [Gathering Facts] *********************************
ok: [k3s-worker-01]
TASK [Add k3s-control-plane to /etc/hosts] *************
changed: [k3s-worker-01]
TASK [Copying join-worker.sh to worker node] ***********
changed: [k3s-worker-01]
TASK [Running join-worker.sh] **************************
changed: [k3s-worker-01]
TASK [Replacing sourcex with a dot] ********************
changed: [k3s-worker-01]
TASK [Restarting k3s-agent service] ********************
changed: [k3s-worker-01]
TASK [Deleting join-worker.sh] *************************
changed: [k3s-worker-01]
PLAY RECAP **********************
k3s-worker-01 : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
All good:
k3s-control-plane:~/k3s-playbooks# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s-control-plane Ready control-plane,etcd,master 46h v1.25.7+k3s1
k3s-worker-01 Ready <none> 3m16s v1.25.7+k3s1
Set Worker role and type #
kubernetes.io/role=worker and node-type=worker are two common labels used to identify worker nodes in the cluster.
The kubernetes.io/role=worker label marks a node as a worker and improves the readability of the output of the kubectl get nodes command:
k3s-control-plane:~# kubectl label nodes k3s-worker-01 kubernetes.io/role=worker
node/k3s-worker-01 labeled
k3s-control-plane:~#
k3s-control-plane:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s-control-plane Ready control-plane,etcd,master 46h v1.25.7+k3s1
k3s-worker-01 Ready worker 7m11s v1.25.7+k3s1
The node-type=worker label is a user-defined label that can be used to further categorize worker nodes in the cluster. It can identify nodes with specific characteristics, such as nodes optimized for running GPU workloads (crypto stuff) or nodes located in a specific data center. I'm just gonna call mine local-worker for now:
k3s-control-plane:~# kubectl label nodes k3s-worker-01 node-type=local-worker
node/k3s-worker-01 labeled
k3s-control-plane:~#
Both of these labels are useful for managing worker nodes in the cluster. For example, you can use them to:
- Schedule workloads on specific worker nodes: by using labels to identify worker nodes with specific characteristics, you can schedule workloads that require those characteristics onto those nodes (see the sketch below).
- Monitor worker node health: by using labels to identify worker nodes, you can monitor the health of specific sets of nodes and troubleshoot issues more easily.
- Apply policies to worker nodes: by using labels to identify worker nodes, you can apply policies that only affect specific sets of nodes, such as policies that restrict network access or apply security patches.
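As an example of the first point, a Deployment can be pinned to the labelled node with a nodeSelector. This is only a sketch: the deployment name is made up, and the image is borrowed from the examples below.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-on-worker            # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-on-worker
  template:
    metadata:
      labels:
        app: demo-on-worker
    spec:
      nodeSelector:
        node-type: local-worker   # the label applied above
      containers:
      - name: web
        image: nginx:latest
EOF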
Cluster Access from another workstation #
If you wish to use kubectl to access the cluster from a workstation that is not part of the control plane, you will need to copy the control plane's kubeconfig to that machine. First, copy or append the content of /etc/rancher/k3s/k3s.yaml to the $HOME/.kube/config file on the target machine. Then, update the server IP address in the copied file to reflect the IP address of the control plane.
It's important to note that when using kubectl on a control plane, the server IP is typically set to localhost, as kubectl uses the API on localhost to fetch information. On a remote workstation, you'll need to specify the IP address of a control plane. If you have multiple control planes, choose one IP address to use.
Accessing the cluster from a worker node will give the following error:
The connection to the server localhost:8080 was refused
Workers do not typically initiate actions on their own. Instead, they perform tasks as instructed by servers or control planes. The workers/agents do not possess a copy of the admin kubeconfig. If you require it, you must obtain it by copying it from a server node. It is generally not recommended to allow workers to use kubectl to access the cluster. It is best to let workers focus on their assigned tasks and leave the management of the cluster to the control plane.
Resources:
- https://github.com/k3s-io/k3s/issues/3862#issuecomment-898846144
- https://github.com/k3s-io/k3s/discussions/5564#discussioncomment-2747106
- https://github.com/k3s-io/k3s/issues/3862
- https://docs.k3s.io/cluster-access
Install MetalLB #
I'm using MetalLB so that services of type LoadBalancer and the Ingress controller will receive an external IP. All the steps I'll be taking regarding MetalLB are based on the information I obtained from the documentation.
Apply the following manifest to install MetalLB:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
Note: As of this writing, 0.13.9 is the latest version. Go here for the latest release.
Controller and Speaker pods are up and running on the worker node:
k3s-control-plane:~/metallb# kubectl get pods -n metallb-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
controller-844979dcdc-fzlnz 1/1 Running 3 (71m ago) 46h 10.42.1.3 k3s-worker-01 <none> <none>
speaker-96tlq 1/1 Running 2 (71m ago) 46h 192.168.100.191 k3s-worker-01 <none> <none>
The speaker pods are deployed as a DaemonSet:
k3s-control-plane:~/metallb# kubectl get daemonset -n metallb-system speaker
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
speaker 1 1 1 1 1 kubernetes.io/os=linux 45h
Only one is available since I only have a single worker node. You should see one speaker pod per node that can run it; here the control plane doesn't get one because of its CriticalAddonsOnly=true:NoExecute taint.
Tying this back to the kubelet discussion: the speaker is defined by a DaemonSet, so the DaemonSet controller decides where it runs, and the kubelet then runs it on the node like any other pod. Looking under /var/lib/kubelet/pods on my worker node, the speaker is easy to miss: it runs with host networking, so its etc-hosts is just a copy of the node's own /etc/hosts (most likely the entry ending in k3s-control-plane below), while the controller pod shows its pod IP:
k3s-worker-01:/var/lib/kubelet/pods# ls | while read i; do cat "$i/etc-hosts" | tail -n1; done
10.42.1.7 controller-844979dcdc-fzlnz
192.168.100.190 k3s-control-plane
Now an IP Address Pool needs to be created and announced/advertised on layer 2. My config:
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: address-pool-01
  namespace: metallb-system
spec:
  addresses:
  - 192.168.100.230-192.168.100.240
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: address-pool-01
  namespace: metallb-system
spec:
  ipAddressPools:
  - address-pool-01
The above manifest provides IPs from 192.168.100.230 to 192.168.100.240. Apply the config:
k3s-control-plane:~/metallb# kubectl apply -f address-pool-01.yaml
ipaddresspool.metallb.io/address-pool-01 created
l2advertisement.metallb.io/address-pool-01 created
View the Address Pool and Advertisement:
k3s-control-plane:~/metallb# kubectl describe -n metallb-system IPAddressPool address-pool-01 | grep Addresses -A 2
Addresses:
192.168.100.230-192.168.100.240
Auto Assign: true
k3s-control-plane:~/metallb# kubectl describe -n metallb-system L2Advertisement address-pool-01 | grep Ip -A 1
Ip Address Pools:
address-pool-01
Deployment of Type LoadBalancer #
Now I will create a deployment and expose it through a Service of type LoadBalancer. MetalLB should hand out an IP address within the 192.168.100.230-192.168.100.240 range:
k3s-control-plane:~# kubectl create deploy nginx-web --image nginx:latest
deployment.apps/nginx-web created
Exposing the deployment on port 80 as a service of type LoadBalancer:
k3s-control-plane:~# kubectl expose deploy nginx-web --port 80 --type LoadBalancer
service/nginx-web exposed
List all services, and as you can see, an external IP was provided:
k3s-control-plane:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 61d
nginx-web LoadBalancer 10.43.91.16 192.168.100.230 80:32240/TCP 3m55s
Browse to the LoadBalancer service, and you should get a response from Nginx:
k3s-control-plane:~# curl -s 192.168.100.230 | html2text
****** Welcome to nginx! ******
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
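MetalLB can also be asked for a specific address from the pool via an annotation. A sketch (the service name is made up; the annotation is the one documented by MetalLB for v0.13+, and the address must fall inside the pool):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-web-fixed-ip                                # hypothetical name, for illustration
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.100.235  # pick an address from the pool
spec:
  type: LoadBalancer
  selector:
    app: nginx-web            # matches the nginx-web deployment created above
  ports:
  - port: 80
    targetPort: 80
EOF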
Nginx Ingress #
Installing the Nginx Ingress Controller using the bare-metal manifest:
k3s-control-plane:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.1/deploy/static/provider/baremetal/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
Make sure the controller pod is running:
k3s-control-plane:~# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-59jxd 0/1 Completed 0 10m
ingress-nginx-admission-patch-vklz6 0/1 Completed 1 10m
ingress-nginx-controller-8579d5f79c-bxcn5 1/1 Running 0 10m
For the deployment, service, and ingress rules, I'll follow the resources mentioned below:
- https://www.youtube.com/watch?v=72zYxSxifpM
- https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/kubernetes/ingress/controller/nginx/features
Deploying service-a and service-b:
k3s-control-plane:~/ingress# kubectl apply -f service-a.yaml
configmap/service-a created
configmap/service-a-nginx.conf created
deployment.apps/service-a created
service/service-a created
k3s-control-plane:~/ingress#
k3s-control-plane:~/ingress# kubectl apply -f service-b.yaml
configmap/service-b created
configmap/service-b-nginx.conf created
deployment.apps/service-b created
service/service-b created
k3s-control-plane:~/ingress#
Both ready and running:
k3s-control-plane:~/ingress# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-web-668b6cbbc-t44fn 1/1 Running 1 (44m ago) 19h
service-a-55fd7bfc4c-hfvhm 1/1 Running 0 3m43s
service-b-6857fbdbcb-drv44 1/1 Running 0 3m38s
The ingress rules are for two different domains, public.service-a.com and public.service-b.com:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-a
spec:
  ingressClassName: nginx
  rules:
  - host: public.service-a.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-b
spec:
  ingressClassName: nginx
  rules:
  - host: public.service-b.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-b
            port:
              number: 80
---
Applying the manifest:
k3s-control-plane:~/ingress# kubectl apply -f ingress-routing-by-domain.yaml
ingress.networking.k8s.io/service-a created
ingress.networking.k8s.io/service-b created
By default, the ingress-nginx controller service is a NodePort:
k3s-control-plane:~/ingress# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 62d
nginx-web LoadBalancer 10.43.91.16 192.168.100.230 80:32240/TCP 19h
service-a ClusterIP 10.43.115.8 <none> 80/TCP 19m
service-b ClusterIP 10.43.135.146 <none> 80/TCP 19m
k3s-control-plane:~/ingress# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.43.113.14 <none> 80:30439/TCP,443:30744/TCP 18h
ingress-nginx-controller-admission ClusterIP 10.43.41.127 <none> 443/TCP 18h
Edit the ingress-nginx-controller service and change type: NodePort to type: LoadBalancer. Save and exit, and an external load balancer IP should be provided:
k3s-control-plane:~/ingress# kubectl edit svc ingress-nginx-controller -n ingress-nginx
service/ingress-nginx-controller edited
k3s-control-plane:~/ingress# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.43.113.14 192.168.100.231 80:30439/TCP,443:30744/TCP 19h
ingress-nginx-controller-admission ClusterIP 10.43.41.127 <none> 443/TCP 19h
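If you'd rather not edit the service interactively, the same change can be made with a one-line patch (a sketch):
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer"}}'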
As you can see, the controller still has two NodePort entries: 30439 and 30744. This means you can also reach service-a and service-b by browsing to the IP address of a worker node on one of those ports. Here's how to access it via the NodePort:
MacBook-Pro:~ root# curl -k -H "Host: public.service-a.com" https://192.168.100.191:30744/
"/" on service-a
From a production or staging infrastructure perspective, whether this poses a security issue depends on your specific requirements and deployment environment.
Add the domains to the /etc/hosts file of a host that is not part of the cluster, and then access the domains to simulate a real-world scenario:
MacBook-Pro:~ root# echo "192.168.100.231 public.service-a.com public.service-b.com" >> /etc/hosts
MacBook-Pro:~ root# for svc in a b; \
> do for path in a b '/'; \
> do curl -k https://public.service-$svc.com/path-$path.html; \
> done; done;
"/path-a.html" on service-a
"/path-b.html" on service-a
service-a 404 page
"/path-a.html" on service-b
"/path-b.html" on service-b
service-b 404 page
MacBook-Pro:~ root#
The happiness of your life depends upon the quality of your thoughts.
– Marcus Aurelius