Managing multiple kubeconfigs by merging them

Recently I've been deploying k8s clusters on different cloud providers, with different automation stacks and in different configurations. As a result, I ended up with a bunch of kubeconfigs polluting my ~/.kube directory.

As much as kubectl is a Swiss Army knife for all things k8s, it still can't quite make dealing with different kubeconfigs a breeze. The annoying part is that you need a single kubeconfig that lists all your clusters, users and contexts, and that is not what you get when every cluster ships its own file. So, naturally, one would want to merge them into a single kubeconfig.

To do that I had to write a small script that performs the following high-level steps:

  1. Back up the existing ~/.kube/config file: you don't want to lose your existing kubeconfig because you messed something up in the script or overwrote it with the wrong kubeconfig.
  2. Find the paths to all kubeconfig files in a particular directory and set them as the KUBECONFIG environment variable.
  3. Merge these kubeconfigs into a single ~/.kube/config file.

Here is the script:

# back up the current kubeconfig (the ~/.kube/bak directory must already exist),
# then flatten every kubeconfig found under ~/.kube/cluster_configs into ~/.kube/config
cp ~/.kube/config ~/.kube/bak/config_$(date +"%Y-%m-%d_%H:%M:%S")_bak && \
KUBECONFIG=$(find ~/.kube/cluster_configs -type f | tr '\n' ':') \
kubectl config view --flatten > ~/.kube/config

Nothing fancy, but it does the job. The rule of thumb I follow now is to keep all my kubeconfigs in the ~/.kube/cluster_configs directory and run this script whenever I add or delete a kubeconfig there. The resulting kubeconfig contains the contexts from all of the files, and I can run kubectl config use-context <context> to switch between them.
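
For example, once the merge is done, switching clusters is just a matter of listing the available contexts and picking one; the context name below is only a placeholder:

# list every context that ended up in the merged ~/.kube/config
kubectl config get-contexts

# switch to one of the contexts
kubectl config use-context my-aks-cluster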

Using the skopeo container image

By now you know I hate to "install" things on the systems I work on, and that is because I have too many machines I carry out work on. Hence, I prefer to containerize all the things and use handy aliases.

Here is one for skopeo to copy images between registries:

alias skopeo='sudo docker run --rm \
-v ~/.config/gcloud:/root/.config/gcloud:ro \
-v ~/.docker/config.json:/tmp/auth.json:ro \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker-credential-gcr:/usr/bin/docker-credential-gcr \
-v $(pwd):/workdir \
-w /workdir \
quay.io/skopeo/stable:v1.14'

$ skopeo --version
skopeo version 1.14.2

Note the quirky docker-credential-gcr binary mount: it is a credential helper that skopeo uses to authenticate with GCP registries. Other clouds might require different helpers or file mounts.
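
With the alias in place, copying an image between registries is a regular skopeo invocation; the source and destination references below are made up for illustration:

skopeo copy \
  docker://docker.io/library/alpine:3.19 \
  docker://europe-docker.pkg.dev/my-project/my-repo/alpine:3.19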

A nice blog post on how to use skopeo's different transports - https://www.redhat.com/sysadmin/7-transports-features

Adding a border to logos

It is super easy nowadays to generate a decent logo for your OSS project using any genAI tool (DALL-E, Bing, etc.). But one of the side effects of the image generation might be the noise around the edges.

If that's your case, you can apply a border (offset) to your logo to mask the potentially rough edges. I can recommend the borderize service, which makes it a matter of a few clicks.

Upscaling images

You know how in TV shows with IMDB ratings ranging from 5 to 7 the cops use that magic image recognition software that transforms a CCTV image of a bad guy from "mashed potato" quality to a Vogue cover? Yeah...

But recently I needed to upscale the AI-generated logo I made for the Clabernetes project, and the quality was "meh", since it came from a free Bing service and whatnot. Whilst it was printable, the DPI was not good enough. So I decided to try my luck and google for these AI-powered image upscaling services; frankly, my hopes were quite low.

The first hit with a fishy DNS name - https://www.upscale.media/ - did nothing to dispel my suspicion that this was all a gimmick. But I tried it, and it was legit good. It upscaled my logo while removing the noise and blurriness from the original image.

Here is the comparison:

[side-by-side comparison of the original and the upscaled logo]

So yeah, something I wanted to save here because I will likely use it next time as well.

OpenStack Client Container Image

I like the portability, manageability and package-manager-agnostic nature of container images. Especially for the tools I use a couple of times a month. And even more so for Python tools that don't have native wheels for all their dependencies. Like the OpenStack Client.

So I built a small multi-stage Dockerfile to build a container image with OpenStack Client and all its dependencies. It's based on the official Python image and has a slim footprint:

# Builder stage: install build deps and the clients, then pre-build wheels
FROM python:3.10-alpine as builder

ARG VERSION=6.4.0
ARG OCTAVIA_VERSION=3.6.0

RUN apk add gcc python3-dev libc-dev linux-headers && \
    pip install wheel \
    python-openstackclient==${VERSION} \
    python-octaviaclient==${OCTAVIA_VERSION}

# freeze the resolved dependency set and build wheels for all of it
RUN pip freeze > requirements.txt && pip wheel -r requirements.txt -w /wheels

# Final image: install from the pre-built wheels only, no build toolchain needed
FROM python:3.10-alpine

COPY --from=builder /wheels /wheels

RUN pip install --no-index --find-links=/wheels python-openstackclient python-octaviaclient

CMD ["openstack"]

You can pull the image from ghcr:

docker pull ghcr.io/hellt/openstack-client:6.4.0

To use this image you first need to source the env vars from your openrc file:

source myopenrc.sh

Then I prefer to add an openstack alias to my shell so that it feels like I have the client installed locally (any arguments you pass are appended after the alias text automatically):

alias openstack="docker run --rm -it \
    -e OS_AUTH_URL=${OS_AUTH_URL} -e OS_PROJECT_ID=${OS_PROJECT_ID} \
    -e OS_USER_DOMAIN_NAME=${OS_USER_DOMAIN_NAME} \
    -e OS_PROJECT_NAME=${OS_PROJECT_NAME} \
    -e OS_USERNAME=${OS_USERNAME} -e OS_PASSWORD=${OS_PASSWORD} \
    ghcr.io/hellt/openstack-client:6.4.0 openstack"

Then you can use the client as usual:

❯ openstack server list
+-----------------------------+----------------+--------+-----------------------------+------------------------------+---------------------+
| ID                          | Name           | Status | Networks                    | Image                        | Flavor              |
+-----------------------------+----------------+--------+-----------------------------+------------------------------+---------------------+
| 0fa75185-0f76-482f-8cc3-    | k8s-w3-411e6d7 | ACTIVE | k8s-net-304e6df=10.10.0.11  | nesc-baseimages-             | ea.008-0024         |
| 38e4d60212c8                |                |        |                             | debian-11-latest             |                     |
-- snip --

Test coverage for Go integration tests

I have been working on containerlab for a while now. What started as a simple idea - a tool that would create and wire up SR Linux containers - grew into a full-blown network emulation tool loved by the community and used in labs by many.

As it became evident that many more users started to rely on containerlab for their daily work, the looming feeling of responsibility for the quality of the tool started to creep in. At the same time, the growing user base exposed us to many more feature requests and integrations, making it harder to find time to address technical debt and improve testing.

Given the nature of the project, it was clear that integration tests offer a quick way to validate the functionality, as we could replicate the user's workflow and verify the outcome. However, integration tests are not without their own challenges, and one of them is test coverage, which is not as easy to collect as with unit tests.

In this post, I will share how the coverage enhancements introduced in Go 1.20 helped us collect coverage for our integration tests and jump from a miserable 20% to a (less sad) 50%.
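
As a rough illustration of what these enhancements enable (not containerlab's exact setup), a binary built with -cover writes coverage counters into the directory pointed to by GOCOVERDIR, and go tool covdata summarizes them afterwards:

# build an instrumented binary (Go 1.20+)
go build -cover -o app-instrumented .

# run the integration scenario; counters are dumped to GOCOVERDIR on exit
mkdir -p /tmp/covdata
GOCOVERDIR=/tmp/covdata ./app-instrumented --some-flag   # hypothetical invocation

# summarize the collected coverage data
go tool covdata percent -i=/tmp/covdata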

gNMIc talks at DKNOG and NANOG

If you have never heard of gNMI and/or the gNMIc project, you can start with the talk I gave at DKNOG last week.

A thirty-minute introduction to gNMI and gNMIc to get you started.

After this taster, you will likely want to know more, and Karim Radhouani has you covered: a 1-hour gNMIc tutorial has recently been published straight from the NANOG 87 stage.

After this one you'll never wanna see OIDs again.

SR Linux logging with ELK

Implementing centralized logging using modern log collectors is an interesting task even before you start solving scaling problems.

My colleague and I opened up a series of posts dedicated to logging in the context of datacenter networks. We started with the basics of SR Linux logging and used the famous ELK stack as our log storage/processing solution.

Integrating SR Linux logging with ELK via Syslog was fun, and we tried to capture every step along the way. Plus, we created a containerlab-based lab that anyone can use to test the solution themselves.

Dig into "SR Linux logging with ELK" and open up the world of modern logging.