
MetalLB

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. It monitors for services of type LoadBalancer and assigns them an IP address from a virtual pool.

The Charmed MetalLB Operator delivers automated operations management from day 0 to day 2 using the MetalLB load balancer implementation for bare metal Kubernetes. It is an open source, production-ready charm for Juju.

The Charmed MetalLB Operator exposes services using either Layer 2 mode (based on ARP, the Address Resolution Protocol) or BGP (Border Gateway Protocol).

In Layer 2 mode, MetalLB supports local traffic, meaning that the machine that receives the data is the machine that services the request. Using a virtual IP for high-traffic workloads is not recommended, because only one machine receives the traffic for a service - the other machines are used solely for failover.

BGP does not have this limitation, but it treats each node as the atomic unit. This means that if the service is running on two of five nodes, only those two nodes receive traffic, and each receives 50% of it even if one node hosts three pods and the other hosts only one. It is recommended to use anti-affinity rules to prevent Kubernetes pods from stacking on a single node.
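For example, a Deployment could use a required pod anti-affinity rule keyed on the node hostname so that no two replicas are scheduled onto the same node. This is a minimal sketch; the application name and image are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
      - name: my-app
        image: my-app:latest        # illustrative image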

Note:

For more information on configuring MetalLB with Calico in BGP mode, please see the explanation of the required configuration on the MetalLB website.

Preparation

Before deploying MetalLB, it is recommended to configure kube-proxy to use IPVS proxy mode with strict ARP enabled.

New clusters

It is easiest to configure IPVS proxy mode at deploy time, by creating a bundle overlay with the following command:

cat > overlay-ipvs.yaml << EOF
applications:
  kubernetes-control-plane:
    options:
      proxy-extra-config: '{mode: ipvs, ipvs: {strictARP: true}}'
  kubernetes-worker:
    options:
      proxy-extra-config: '{mode: ipvs, ipvs: {strictARP: true}}'
EOF

Then, to deploy Charmed Kubernetes with the overlay:

juju deploy charmed-kubernetes --overlay overlay-ipvs.yaml

Existing clusters

For existing clusters, this configuration can be set by using the juju config command:

juju config kubernetes-control-plane proxy-extra-config='{mode: ipvs, ipvs: {strictARP: true}}'
juju config kubernetes-worker proxy-extra-config='{mode: ipvs, ipvs: {strictARP: true}}'

However, when changing proxy modes, kube-proxy will leave behind old iptables rules that are no longer valid. To clean up the old rules, you must wait until the kubernetes-control-plane and kubernetes-worker units have finished processing the config change. Then, reboot the Kubernetes host machines.
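As an example, you could confirm the new proxy mode and then trigger the reboot from Juju. This is a sketch, assuming Juju 3.x (where juju exec runs commands on units) and that kube-proxy exposes its metrics endpoint on the default local port 10249:

# Verify kube-proxy is now running in IPVS mode (should print "ipvs")
juju exec --unit kubernetes-worker/0 -- curl -s http://localhost:10249/proxyMode
# Reboot the host to clear the stale iptables rules
juju exec --unit kubernetes-worker/0 -- sudo reboot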

Deployment

Layer 2 mode

To deploy the operators, you will first need a Kubernetes model in Juju. Add your Charmed Kubernetes as a cloud to your Juju controller:

juju add-k8s ck8s --controller $(juju switch | cut -d: -f1)

Next, create a new Kubernetes model:

juju add-model metallb-system ck8s

Run the following command, which will fetch the charm from Charmhub and deploy it to your model:

juju deploy metallb --channel 1.28/stable --trust

Juju will now fetch Charmed MetalLB and begin deploying it to the Kubernetes cluster. This process can take several minutes depending on the resources (RAM, CPU, etc.) available to your machine. You can track the progress by running:

juju status --watch 1s

This command is useful for checking the status of Charmed MetalLB and gathering information about the containers hosting it. Some of the helpful information it displays includes IP addresses, ports, state, etc. The command updates the status every second, so as the application starts you can watch its status and messages change. Wait until the application is ready - when it is, juju status will show:

Model         Controller       Cloud/Region                     Version  SLA          Timestamp
juju-metallb  overlord         k8s-cloud/default                3.1.5    unsupported  13:32:58-05:00

App      Version  Status  Scale  Charm    Channel      Rev  Address        Exposed  Message
metallb           active      1  metallb  1.28/stable  9    10.152.183.85  no       

Unit        Workload  Agent  Address       Ports  Message
metallb/0*  active    idle   192.168.0.15

To exit the screen with juju status --watch 1s, enter Ctrl+C. If you want to inspect the logs further, you can watch them with juju debug-log. More information on logging is available in the Juju logs documentation.

Configuration

You will need to change the IP addresses allocated to MetalLB to suit your environment. The IP addresses can be specified as a range, such as “192.168.1.88-192.168.1.89”, or as a comma-separated list of pools in CIDR notation, such as “192.168.1.240/28, 10.0.0.0/28”.

Configuring the IP addresses can be done either at deployment time, via a single-line config, or later by changing the charm config via Juju.

Either way, the charm adjusts the default IPAddressPool.spec.addresses it creates to match the addresses you specify.

An example single-line config adjustment might look like:

juju deploy metallb --config iprange='192.168.1.88-192.168.1.89' --trust

Alternatively, you can change the config directly on the metallb charm at any time:

juju config metallb iprange="192.168.1.240/28, 10.0.0.0/28"
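To confirm the result, you can inspect the IPAddressPool resource the charm manages. This is a sketch, assuming the charm was deployed into the metallb-system model (and therefore namespace) used earlier:

kubectl get ipaddresspools.metallb.io -n metallb-system -o yaml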

BGP mode

Since the Kubernetes operator charms for MetalLB do not yet support BGP mode, for now the recommended way to deploy MetalLB in BGP mode is to use the upstream manifests:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Configuration

The BGP configuration can then be performed by using a MetalLB ConfigMap.
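As a sketch, a minimal ConfigMap for the v0.9.x manifests above could look like the following; the peer address, AS numbers and address range are placeholders that must match your network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1      # address of your BGP router (placeholder)
      peer-asn: 64501             # AS number of the router (placeholder)
      my-asn: 64500               # AS number MetalLB should use (placeholder)
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24           # range MetalLB may allocate (placeholder)

Once applied with kubectl apply -f, MetalLB advertises the service IPs it allocates from this pool to the configured peer.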

Using MetalLB

Once deployed, MetalLB will automatically assign IPs from its pools to any service of type LoadBalancer. When the services are deleted, the IPs are available again.
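For example, a minimal Service of type LoadBalancer (the name, selector and ports here are illustrative) would be assigned an address from the pool:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app           # matches the pods to expose
  ports:
  - port: 80              # port exposed on the load-balanced IP
    targetPort: 8080      # port the pods listen on

After applying it, kubectl get service my-app should show the allocated address in the EXTERNAL-IP column.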
