I wrote about running Yang Explorer in a Docker container quite some time ago, back when Yang Explorer was at v0.6. The motivation to create a Docker image was pretty simple then: installation was a pain in v0.6. It is still a pain, but the official version has since bumped to 0.8 (beta).
So I decided to rebuild the image, this time using Alpine Linux as the base image to reduce its size.
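For the impatient, the workflow stays the usual Docker one. Here is a minimal sketch; the image tag and the published port are my assumptions rather than values taken from the actual project:
# a minimal sketch of building and running the image
# (the tag and the port mapping below are hypothetical)
$ docker build -t yang-explorer:alpine .
$ docker run -d -p 8088:8088 yang-explorer:alpine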
Hugo gets a lot of attention these days; it is basically snapping at the heels of Jekyll, which is still the king of the hill! I don't know whether Hugo's popularity comes from its claim of being the fastest static site generator, but for me speed is not the issue at all. A personal blog normally has a few hundred posts, not even close to the thousands you would need to worry about slowness.
So if it is not about speed, why did I choose Hugo? Because it has become a solid product with a crowded community and all the common features available. (To be honest, I also nurture the illusion that one day I might start sharpening my Go skills through Hugo as well.)
As you have already noticed, this blog is powered by Hugo, runs on GitLab Pages with an SSL certificate from CloudFlare, and costs me $0. I would like to write down the key pieces that will probably be of help on your path to a zero-cost personal blog/CV/landing page/etc.
I started to play with Go, aka Golang. Yeah, leaving the comfort zone, all that buzz. And for quite some time I have been using VS Code whenever and wherever I do development work.
VS Code has solid Go support via its official extension:
This extension adds rich language support for the Go language to VS Code, including:
Completion Lists (using gocode)
Signature Help (using gogetdoc or godef+godoc)
Quick Info (using gogetdoc or godef+godoc)
Goto Definition (using gogetdoc or godef+godoc)
Find References (using guru)
File outline (using go-outline)
Workspace symbol search (using go-symbols)
Rename (using gorename)
Build-on-save (using go build and go test)
Lint-on-save (using golint or gometalinter)
Format (using goreturns or goimports or gofmt)
Generate unit tests skeleton (using gotests)
Add Imports (using gopkgs)
Add/Remove Tags on struct fields (using gomodifytags)
Semantic/Syntactic error reporting as you type (using gotype-live)
Note the Go tools listed in brackets: they power all that extra functionality and get installed into your GOPATH once you install them via VS Code.
And here you might face an issue if you want to use Go + VS Code on both Mac and Linux with your GOPATH inside a Dropbox folder (or any other syncing service): the Mac and Linux binaries will overwrite each other once you install the extension on your second platform. Indeed, by default VS Code fetches the source code of the tools, builds them, and places the binaries in $GOPATH/bin.
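One possible way around the clash (a sketch of my own, not taken from the post) is to keep the synced GOPATH for sources while sending the tool binaries to a machine-local, per-platform directory; the paths below are hypothetical, and the Go extension also offers a go.toolsGopath setting for exactly this purpose:
# keep sources in the synced GOPATH, but place tool binaries
# in a local per-OS directory so Mac and Linux don't fight
export GOPATH="$HOME/Dropbox/go"                               # synced sources (hypothetical path)
export GOBIN="$HOME/.gotools/$(uname -s | tr 'A-Z' 'a-z')/bin" # darwin vs linux binaries kept apart
mkdir -p "$GOBIN"
export PATH="$GOBIN:$PATH"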
There is nothing bad in knowing how many lines of code or text are out there in your repo. You don't even need your VCS to provide these analytics; all you need is git, grep and wc.
# count lines in .py and .robot files in /nuage-cats dir of the repo
$ git ls-files nuage-cats/ | grep -E ".*(py|robot)" | xargs wc -l
0 nuage-cats/robot_lib/__init__.py
817 nuage-cats/robot_lib/lib/NuageQoS.py
409 nuage-cats/robot_lib/lib/NuageVCIN.py
1841 nuage-cats/robot_lib/lib/NuageVNS.py
2964 nuage-cats/robot_lib/lib/NuageVSD.py
# OMITTED
26 nuage-cats/test_suites/0910_fail_bridges_kvm_vms/0910_fail_bridges_kvm_vms.robot
13636 total
The cloud-native revolution pointed out that the microservice is the new building block, and your best friends now are containers, AWS, GCE, OpenShift, Kubernetes, you name it. But suddenly micro was not granular enough, and people started talking about serverless functions!
When I decided to step into serverless territory, I chose AWS Lambda as my instrument. As the experimental subject I picked one of my existing projects: a script that tracks new documentation releases for Nokia IP/SDN products (which in the past I aggregated at nokdoc.github.io, now closed).
Given that not many posts go deeper than onboarding the simplest of functions, I decided to write down the key pieces I had to uncover to push real code to Lambda.
Buckle up, our agenda is fascinating:
testing the basic Lambda onboarding process powered by the Serverless framework (a minimal sketch of it follows this list)
accessing files in AWS S3 from within our Lambda using the boto3 package and a custom AWS IAM role
packaging non-standard Python modules for our Lambda
exploring ways to provision shared code for Lambdas
and using path variables to branch out the code in Lambda
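To give a flavor of the onboarding step, here is a minimal sketch with the Serverless framework CLI; the service path is hypothetical, and the hello function is simply what the aws-python3 template generates by default:
# a minimal onboarding sketch (the service name/path is hypothetical)
$ npm install -g serverless
$ serverless create --template aws-python3 --path nokdoc-tracker
$ cd nokdoc-tracker
# handler.py and serverless.yml get generated; deploy once AWS credentials are set up
$ serverless deploy
# invoke the generated hello function and show its logs
$ serverless invoke -f hello --log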
virsh is the go-to console utility for managing Qemu/KVM virtual machines. But when it comes to deleting VMs, you had better keep calm: there is no single command to destroy the VM, its definition XML file and its disk image.
That is probably not a big problem if you have long-living VMs, but in a testing environment it is natural to spawn and kill VMs quite often. Let's see how xargs can help us with that routine.
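As a teaser, the idea boils down to something like the sketch below; the "test-" name filter is purely hypothetical, and the exact pipeline in the post may differ:
# destroy and undefine every domain whose name starts with "test-",
# removing its storage volumes along the way
$ virsh list --all --name | grep '^test-' | \
    xargs -I {} sh -c 'virsh destroy {}; virsh undefine {} --remove-all-storage'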
It may very well be that VPLS's days are numbered, and EVPN is to blame. Nevertheless, it would be naive to expect VPLS to go extinct in the near future. With all its shortcomings, VPLS is still very well standardized, interop-proven, and has a huge footprint in MPLS networks of various scales.
In this post I will cover the theory and configuration parts for one particular flavor of VPLS signalling: BGP VPLS (aka Kompella VPLS), defined in RFC 4761. I'll start with a simple single-homed VPLS scenario, while multi-homing techniques and some advanced configurations might appear in a separate post later.
In this post the following SW releases were used:
Recently I revived my relationship with Python in an effort to tackle the routine tasks appearing here and there. So I started to write some pocket scripts and, luckily, I was not the only one on this battlefield: my colleagues also have a bunch of useful scripts. With all those code snippets sent in emails, cloned from repos, grabbed from network shares... I started to wonder how much easier it would be if someone aggregated them all and presented them with a Web UI for shared access.
Thus, I started to build a web front-end for the Python scripts we used daily, with these goals in mind:
allow people with zero knowledge of Python to use the scripts by interacting with them through a simple Web UI;
make the scripts' output more readable by leveraging CSS and HTML formatting;
aggregate all the scripts in a single repo but in separate, sandboxed directories to keep the code manageable.
This short demo should give you a taste of what it is:
Disclaimer: I am nowhere near a professional Python or web developer. What makes it even worse is that I heavily relied on a very dangerous coding paradigm: SDD, Stack Overflow Driven Development. So hurt me plenty if you spot some awful mistakes.
The topic of this post is Layer 3 VPN (L3VPN, or VPRN as we call it in SROS) configuration, and I decided to kill two birds with one stone by inviting a Juniper vMX into our cozy SROS environment.
The BGP/MPLS VPN (RFC 4364) configuration will go through the following milestones:
PE-PE relationship configuration with the introduction of the VPN-IPv4 address family
PE-CE routing configuration with both BGP and OSPF as routing protocols
Export policy configuration for advertising VPN routes on PE routers
AS override configuration
and many more
We'll wrap it up with Control Plane/Data Plane evaluation diagrams, which help a lot with understanding the whole BGP VPN mechanics. Take your seats and buckle up!
In the first part of this BGP tutorial we prepared the ground by configuring eBGP/iBGP peering. We did a good job overall, yet plain BGP peering is not something you would normally see in production. The power of BGP lies in its ability to granularly manage multiple routes from multiple sources, and the tools that help BGP handle this complex task are BGP policies in all their glory.
In this part we will discuss and practice:
BGP export/import policies for route advertisement/filtering
BGP community operations
BGP route aggregation: route summarization and the corresponding aggregate and atomic-aggregate path attributes