Once in a while, one still needs to get down to VM-land and dust off some guestfish skills. Today, for example, I got the IPInfusion OcNOS qcow2 image, whose devs decided it is best to use a VNC console by default. A VNC console for a text-based terminal… So in come the guestfish commands. It is hugely satisfying to modify VM images using containers, so here are my two commands for modifying the GRUB settings.
Here comes the second episode of the NetRel show: NetRel episode 002 - DIY YANG Browser. Be ready to dive into the paths we took to create a YANG Browser for the Nokia SR Linux platform. YANG data models are the map to use when finding your way to configure or retrieve any data on an SR Linux system. The central role given to YANG in SR Linux demands a convenient interface to browse, search through, and process these data models.
Okay, here goes my first attempt at filling the shoes of a content creator. Please welcome NetRel episode 001 - Decoding gNMI with Wireshark: a 35-minute journey of using Wireshark to parse gNMI traffic (both non-secured and secured). I won’t spend your time explaining the first episode; instead, let me tell you what I want the NetRel series to be about. I am interested in covering the aspects of network automation that are not widely covered elsewhere.
As the networking industry is (slowly) moving towards forklifting networking functions into the cloud-native space, we often witness decade-old tools being mixed with cloud-native approaches and architectures. This post is about one such crazy mixture: using the screen-scraping library scrapligo with kubectl exec and docker exec commands. What and why? I can imagine that for some readers the previous sentence makes no sense: why would you need a screen-scraping library when working with standalone containers or Kubernetes workloads?
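To make the "what" concrete, here is a bare-bones Go sketch of the idea behind an exec-based transport: instead of dialing SSH, you spawn docker exec -i and talk to the NOS CLI over its stdin/stdout pipes. This is not scrapligo's actual API; the container name srl1 and the CLI binary sr_cli are assumptions made purely for illustration.

```go
package main

import (
	"fmt"
	"io"
	"os/exec"
)

func main() {
	// The "transport" is a spawned process rather than an SSH session:
	// docker exec -i <container> <cli>. "srl1" and "sr_cli" are
	// hypothetical names used only for this sketch.
	cmd := exec.Command("docker", "exec", "-i", "srl1", "sr_cli")

	stdin, err := cmd.StdinPipe()
	if err != nil {
		panic(err)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Feed the CLI a command exactly as a human would type it...
	fmt.Fprintln(stdin, "show version")
	stdin.Close()

	// ...and scrape back whatever the CLI prints.
	out, _ := io.ReadAll(stdout)
	fmt.Println(string(out))

	_ = cmd.Wait()
}
```

A real driver such as scrapligo layers prompt detection, pagination handling, and timeouts on top of such a pipe; the point is simply that the transport does not have to be SSH.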
It’s been almost two years since Nokia announced its Data Center Fabric solution. The three-layered solution ranged from hardware platforms all the way up the stack to the DC fabric lifecycle management suite - Fabric Services System (FSS). At the very heart of the DC Fabric solution lies a purpose-built, modern Network OS - SR Linux. SR Linux comes with quite a few interesting and innovative ideas. By designing the NOS from the ground up, the product team was freed from the legacy burdens that would have been there had they decided to build the NOS on top of an existing one.
Just recently, the network automation folks witnessed a great library being ported from Python to Go - scrapligo. In the author's own words: "Been working on learning go a bit and have published scrapligo https://t.co/NDXQ6khxCr -- still a work in progress, but has been a fun learning experience! Check it out and let me know what ya think! 🤠" — Carl Montanari (@carlrmontanari), May 19, 2021. For me personally, this was a pivotal moment: with scrapligo, Go-minded netengs can now automate their networks with a solid and performant library.
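For a taste of what that looks like, below is a short sketch using scrapligo's present-day API (the package layout has evolved since the initial release, so treat it as illustrative rather than canonical); the platform name, host, and credentials are placeholders I chose for the example.

```go
package main

import (
	"fmt"

	"github.com/scrapli/scrapligo/driver/options"
	"github.com/scrapli/scrapligo/platform"
)

func main() {
	// Build a platform-aware driver; host and credentials are placeholders.
	p, err := platform.NewPlatform(
		"cisco_iosxe",
		"198.51.100.1",
		options.WithAuthNoStrictKey(),
		options.WithAuthUsername("admin"),
		options.WithAuthPassword("admin"),
	)
	if err != nil {
		panic(err)
	}

	d, err := p.GetNetworkDriver()
	if err != nil {
		panic(err)
	}
	if err := d.Open(); err != nil {
		panic(err)
	}
	defer d.Close()

	// Send a show command and print the scraped output.
	r, err := d.SendCommand("show version")
	if err != nil {
		panic(err)
	}
	fmt.Println(r.Result)
}
```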
With the growing number of containerized Network Operating Systems (NOSes) grows the demand to easily run them in user-defined, versatile lab topologies. Unfortunately, container runtimes alone and tools like docker-compose are not a particularly good fit for that purpose, as they do not allow a user to easily create p2p connections between the containers. Containerlab provides a framework for orchestrating networking labs with containers. It starts the containers, builds virtual wiring between them to create a topology of the user's choice, and then manages the lab lifecycle.
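To give a flavor of that workflow, a minimal topology file could look like the sketch below; the lab name, node names, and image tag are placeholders I picked for illustration, not taken from any particular lab.

```yaml
# srl-lab.clab.yml - two nodes and one p2p link between them
name: srl-lab
topology:
  nodes:
    srl1:
      kind: nokia_srlinux
      image: ghcr.io/nokia/srlinux
    srl2:
      kind: nokia_srlinux
      image: ghcr.io/nokia/srlinux
  links:
    # a virtual wire between the first ethernet port of each node
    - endpoints: ["srl1:e1-1", "srl2:e1-1"]
```

Deploying the lab is then a single containerlab deploy -t srl-lab.clab.yml, and destroying it is just as simple.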
I am a huge fan of goreleaser, a tool that enables users to build Go projects and package/publish the build artifacts in a fully automated and highly customizable way. We have been using goreleaser with all our recent projects and couldn't be happier. But once the artifacts are built and published, the next important step is to make them easily installable, especially if you provide deb/rpm packages built with the nFPM integration.
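For reference, here is a minimal sketch of what that integration looks like in a .goreleaser.yml; the package name, maintainer, and install path are placeholders of my own, not values from our projects.

```yaml
# excerpt from a hypothetical .goreleaser.yml
nfpms:
  - package_name: mytool
    formats:
      - deb
      - rpm
    maintainer: Jane Doe <jane@example.com>
    description: An example CLI tool packaged via the nFPM integration.
    bindir: /usr/bin
```

With a section like this, goreleaser release produces .deb and .rpm files next to the regular archives; making those easy to install is what this post is about.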
Lately I have been consumed by the idea of running container-based labs that span containerized NOSes, classical VM-based routers, and regular containers with a single, uniform UX. Luckily, the foundation was already there. With plajjan/vrnetlab you get a toolchain that cleverly packages qemu-based VMs inside containers, and with networkop/docker-topo you can deploy, run, and wire containers into meshed topologies. One particular thing still needed to be addressed, though: the way we interconnect the containers that host vrnetlab-created routers inside.
Running multiple VMs off the same disk image is something we, network engineers, do quite often. A virtualized network usually consists of a few identical virtualized network elements that we interconnect with links to make a topology. In the example above we have 7 virtualized routers in total, although we used only two VM images to create this topology (a virtualized Nokia router and its Juniper vMX counterpart). Each of these VMs requires some memory to run; for simplicity, let's say each VM requires 5GB of RAM, so the seven routers add up to 35GB.