
Index

SR OS Rootifier or how to flatten 7750 SR config

Back in the days when I mostly did routing stuff, I spent whole days configuring SR OS devices via SSH. And once in a while the SSH session or its server side (or even the underlying connection) glitched, resulting in corrupted lines being fed to the device.

Another quite common scenario was making a mistake (a syntax error, say) in a single line and watching the rest of the config get applied to the wrong context.

These sad facts pushed me to create the rootifier CLI script, which converts a tree-like SR OS config into a flattened (aka rooted) form.

rootifier

update 2023

The web service used to be publicly available, but it has now been decommissioned due to the transition to MD-CLI.
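The original script is not reproduced in this teaser, but the flattening idea is simple enough to sketch. Below is a rough Python illustration (my own simplification, not the actual rootifier code): keep a stack of open contexts driven by indentation and prefix every statement with its full path. The sample config is made up for the example.

# rootify_sketch.py -- a simplified illustration of the flattening idea
def rootify(config_text):
    """Turn an indentation-based SR OS classic config into rooted one-liners."""
    rooted = []
    stack = []  # (indent, statement) pairs of currently open contexts
    for raw in config_text.splitlines():
        if not raw.strip() or raw.strip().startswith("#"):
            continue  # skip blanks and comments
        indent = len(raw) - len(raw.lstrip())
        stmt = raw.strip()
        # leave contexts that sit at the same or deeper indentation level
        while stack and stack[-1][0] >= indent:
            stack.pop()
        if stmt == "exit":
            continue  # closing a context is already implied by indentation
        prefix = " ".join(s for _, s in stack)
        rooted.append((prefix + " " + stmt).strip())
        stack.append((indent, stmt))  # the line may open a new context
    return rooted

sample = """
configure
    router
        interface "system"
            address 10.10.10.1/32
        exit
    exit
exit
"""

for line in rootify(sample):
    print(line)
# prints, among others:
# configure router interface "system" address 10.10.10.1/32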

Flask application in a production-ready container

Flask documentation is very clear about where its built-in WSGI application server belongs:

Note

When running publicly rather than in development, you should not use the built-in development server (flask run). The development server is provided by Werkzeug for convenience, but is not designed to be particularly efficient, stable, or secure.

So how about I share with you a Dockerfile that will make your Flask application run properly, ready for production-like deployments? As a bonus, I will share the findings I discovered along the way while building this container image.

nginx-uwsgi-flask-alpine-docker
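For context (and purely as an assumption of mine, not a quote from the Dockerfile), such a container serves a plain WSGI entrypoint instead of `flask run`. A minimal sketch of what uWSGI would load, with the `wsgi.py` file name and the `app` callable being placeholder choices:

# wsgi.py -- hypothetical entrypoint that a uWSGI-based container could load,
# e.g. with `module = wsgi:app` in uwsgi.ini (names are placeholders)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # trivial endpoint to verify the nginx -> uWSGI -> Flask chain works
    return "Hello from Flask behind uWSGI"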

Waiting for SSH service to be ready with Paramiko

Today I faced a task which required me first to establish an SSH tunnel in a background process and later use this tunnel for an SSH connection. What seemed like child's play at first actually had some fun inside.

The problem was hiding right between the moment you spawn the ssh process in the background and the moment you try to use this tunnel. In other words, it takes literally no time to spawn a process in the background, but without checking that the tunnel is ready you will quite likely receive an error, since your next instructions are executed immediately after.

Consequently, I needed a way to ensure that the SSH service is ready before I try to consume it.
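The post's exact code is not repeated here, but a minimal sketch of the approach could look like the snippet below: keep trying to negotiate an SSH transport with Paramiko until it succeeds or a deadline passes. Host, port and timeout values are placeholders.

# wait_for_ssh.py -- a rough sketch, not the exact code from the post
import socket
import time

import paramiko

def wait_for_ssh(host, port=22, timeout=30.0, interval=1.0):
    """Return True once an SSH transport can be negotiated on host:port."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=interval)
            transport = paramiko.Transport(sock)
            transport.start_client()  # exchange banners and negotiate keys
            transport.close()
            return True
        except (OSError, paramiko.SSHException):
            time.sleep(interval)  # not ready yet, try again
    return False

# example: wait for a local tunnel endpoint before connecting through it
if wait_for_ssh("127.0.0.1", 2222):
    print("tunnel is ready, safe to connect")
else:
    print("gave up waiting for the SSH service")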

Changing Libvirt bridge attachment in a running domain aka on-the-fly

At work I always prefer KVM hosts for reasons such as flexibility, zero cost and no GUI. Yet I never bothered to go deeper into the networking features of Libvirt, so I only connect VMs to the host networks via Linux bridges or OvS. Far, far away from fancy virtual libvirt networks.

Even with this simple networking approach I recently faced the tedious task of reconnecting VMs to different bridges on the fly.
My use case came from the need to connect a single traffic generator VM to the different access ports of virtual CPEs. Essentially this meant that I needed to reconnect my traffic generator interfaces to different bridges back and forth.

Apparently there is no dedicated virsh command that lets you change the bridge attachment of a network device directly, so a bit of bash-ing came in handy.
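For illustration only, here is a rough Python sketch of one way to pull this off with `virsh update-device` and a rebuilt interface XML; the domain, MAC address and bridge names are placeholders, and the original post relies on a bash script rather than this exact code.

# reattach_bridge.py -- sketch of moving a running domain's NIC to another bridge
import subprocess
import tempfile

IFACE_XML = """<interface type='bridge'>
  <mac address='{mac}'/>
  <source bridge='{bridge}'/>
  <model type='virtio'/>
</interface>
"""

def reattach_bridge(domain, mac, new_bridge):
    """Re-point the interface identified by its MAC to a different bridge, live."""
    xml = IFACE_XML.format(mac=mac, bridge=new_bridge)
    with tempfile.NamedTemporaryFile("w", suffix=".xml") as f:
        f.write(xml)
        f.flush()
        # update-device matches the existing interface by MAC and swaps the source
        subprocess.run(["virsh", "update-device", domain, f.name, "--live"], check=True)

# example: flip the traffic generator port between two access bridges (placeholders)
reattach_bridge("trafgen-vm", "52:54:00:aa:bb:cc", "br-access-2")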

Installing xrdp 0.9.1 on Ubuntu 16.04 Xenial

xrdp is the de facto default RDP server for Linux systems, sharing the remote access Olympus with VNC. I personally found it more resource-friendly and feature-rich than the VNC solutions I tried.

The only problem I found with xrdp is that the current Ubuntu LTS release, Xenial 16.04, ships a way outdated 0.6.1-2 version of xrdp in its package repo. This version has no shared clipboard support, which makes remote support/remote access a tedious task.

xrdp is currently at version 0.9.3, and it would be really nice to have a more recent package rather than installing it from sources, as many guides propose.

Well, there is no need to compile xrdp from sources (unless you want to), because you can leverage a PPA from hermlnx that has xrdp 0.9.1-7 already built for amd64 and i386 systems:

# all you need is
sudo add-apt-repository ppa:hermlnx/xrdp
sudo apt-get update
sudo apt-get install xrdp

You can also try a deb package of xrdp 0.9.2 -- https://github.com/suminona/xrdp-ru-audio

How to install python3 in Amazon Linux AMI

While Amazon Linux AMI has yum as a package manager, it is not all that compatible with any RHEL or CentOS distribution. The many changes that the AWS team brought into this image made it a separate distro, so no eyebrows should be raised when a battle-tested procedure to install python3 fails on Amazon Linux. (Yeah, python3 does not come included in Amazon Linux yet.)

Fortunately it is very easy to fetch python3 (though not the latest release):

# list available packages that have python3 in their name
yum list | grep python3

# install python3+pip, plus optionally packages to your taste
sudo yum install python35 python35-devel python35-pip python35-setuptools python35-virtualenv

# update pip3. optionally set a symbolic link to pip3
sudo pip-3.5 install --upgrade pip

And that is it!

Yang Explorer in a docker container based on Alpine

I wrote about running Yang Explorer in docker quite some time ago; Yang Explorer was at v0.6 back then. The motivation to create a docker image was pretty simple -- installation was a pain in v0.6. It is still a pain, but the official version has been bumped to 0.8 (beta).

So I decided to re-build an image, now using Alpine Linux as a base image to reduce the size.

Setting up a Hugo blog with GitLab and CloudFlare

Hugo gets a lot of attention these days; it is basically snapping at the heels of Jekyll, which is still the king of the hill! I don't know whether Hugo's popularity comes from the "fastest static site generator" claim, but for me speed is not the issue at all. A personal blog normally has a few hundred posts, not even close to the thousands where slowness would become a worry.

So if it is not for speed, why did I choose Hugo? Because it became a solid product with a lively community and all the common features available. (To be honest, I also nurture an illusion that one day I might start sharpening my Go skills through Hugo as well.)

As you have already noticed, this blog is powered by Hugo, runs on GitLab Pages with an SSL certificate from CloudFlare, and costs me $0. I would like to write down the key pieces that will probably be of help on your path to a zero-cost personal blog/CV/landing page/etc.

How to make the VS Code Go extension work in your cloud folder on different platforms?

I started to play with Go aka Golang. Yeah, leaving the comfort zone, all that buzz. And for quite some time now I have been using VS Code whenever/wherever I do dev work.

VS Code has solid Go support via its official extension:

Info

This extension adds rich language support for the Go language to VS Code, including:

  • Completion Lists (using gocode)
  • Signature Help (using gogetdoc or godef+godoc)
  • Quick Info (using gogetdoc or godef+godoc)
  • Goto Definition (using gogetdoc or godef+godoc)
  • Find References (using guru)
  • File outline (using go-outline)
  • Workspace symbol search (using go-symbols)
  • Rename (using gorename)
  • Build-on-save (using go build and go test)
  • Lint-on-save (using golint or gometalinter)
  • Format (using goreturns or goimports or gofmt)
  • Generate unit tests skeleton (using gotests)
  • Add Imports (using gopkgs)
  • Add/Remove Tags on struct fields (using gomodifytags)
  • Semantic/Syntactic error reporting as you type (using gotype-live)

Note the Go tools in the brackets: these are what power all that extra functionality, and they get installed into your GOPATH once you install them via VS Code.

And here you might face an issue if you want to use Go + VS Code on both Mac and Linux with a Dropbox folder (or any other syncing service). The issue is that the Mac and Linux binaries will overwrite each other once you install the extension on your second platform. Indeed, by default VS Code fetches the source code of the tools, builds them, and places the binaries in $GOPATH/bin.

How to count lines of code in a Git repo?

There is nothing wrong with knowing how many lines of code or text are out there in your repo. You don't even need your VCS hosting to provide these analytics. All you need is git, grep and wc.

# count lines in .py and .robot files in /nuage-cats dir of the repo
$ git ls-files nuage-cats/ | grep -E ".*(py|robot)" | xargs wc -l
       0 nuage-cats/robot_lib/__init__.py
     817 nuage-cats/robot_lib/lib/NuageQoS.py
     409 nuage-cats/robot_lib/lib/NuageVCIN.py
    1841 nuage-cats/robot_lib/lib/NuageVNS.py
    2964 nuage-cats/robot_lib/lib/NuageVSD.py
    # OMITTED
      26 nuage-cats/test_suites/0910_fail_bridges_kvm_vms/0910_fail_bridges_kvm_vms.robot
   13636 total