Network Manager and OpenVPN

It blows my mind that Network Manager is still as bad as ever. I had just finished pointing my new phone at the home VPN when I remembered that the laptop lost all its old settings in my switch to Fedora, so I figured I would give it a spin and see if somehow NM had been fixed.  A few minutes and some profanity later, it seems it STILL is unable to properly load .ovpn profiles and parse out the various bits into the fields they need to go.  Even when I manually split up the keys and certs and all that, it only worked halfway: I could connect to the VPN but was unable to browse the internet over it or even access resources local to the VPN server itself.  Fortunately the command line comes to the rescue again; all I had to do was tell openvpn itself where the config was and it did all the legwork that the abomination known as Network Manager failed to do.  For those who might care, the proper way to invoke it is as follows:
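A minimal sketch of the invocation; the config path here is a placeholder for wherever your exported .ovpn profile lives, not the path from my setup:

```shell
# Point openvpn straight at the exported profile; it parses the inline
# certs/keys itself (run as root since it creates a tun device)
sudo openvpn --config /path/to/client.ovpn
```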

Now I just have to make a handy way to suppress the output, give me a status indicator, and kill off the connection when I am done with it…
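Those three items could be wrapped up as a few shell functions. A sketch, leaning on openvpn's real `--daemon` and `--writepid` flags; the pidfile and config locations are assumptions:

```shell
# Sketch of a wrapper; the PIDFILE location and config path are assumptions.
PIDFILE="${PIDFILE:-/tmp/openvpn-home.pid}"

vpn_up() {
  # --daemon detaches openvpn (suppressing console output);
  # --writepid records its PID so we can check on it later
  sudo openvpn --config "$HOME/vpn/client.ovpn" --daemon --writepid "$PIDFILE"
}

vpn_status() {
  # kill -0 sends no signal; it only checks that the process still exists
  if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "VPN up (pid $(cat "$PIDFILE"))"
  else
    echo "VPN down"
  fi
}

vpn_down() {
  [ -f "$PIDFILE" ] && sudo kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
}
```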

Successful Upgrade is Successful

I would say I can’t believe I’m typing this from a successful full upgrade from Fedora 23 to 24, but I’m not, since I am at work and they frown upon me pecking away on my personal laptop.  Still, I am amazed that upgrading from 23 to 24 with dnf was an absolutely painless process.  In prior years it was almost always advisable to reinstall rather than attempt an upgrade from one major release to the next, but the fine folks over at Fedora seem to have hit a home run on this one.  Sure, it took a while to apply everything, but the moment of truth (or reboot) came and passed and all I got was my normal login screen: no fancy explosions of failed video drivers, no corrupted profiles or missing files.  It went so smoothly I almost didn’t think it had worked until I checked the redhat-release file and verified that it was in fact on the 24 release.
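For anyone wanting to try it, the usual route is Fedora's official dnf system-upgrade plugin; this is the documented sequence rather than a transcript of my exact terminal session:

```shell
sudo dnf upgrade --refresh                        # get fully current on 23 first
sudo dnf install dnf-plugin-system-upgrade        # the system-upgrade plugin
sudo dnf system-upgrade download --releasever=24  # fetch the Fedora 24 package set
sudo dnf system-upgrade reboot                    # reboot into the offline upgrade
cat /etc/redhat-release                           # afterwards: verify the release
```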

Crontab – Always Check your Environment Variables

So I have been running into this issue for about a month now: a script that executes fine when I run it by hand from the command line goes absolutely pear shaped, without any real explanation, when I run it via a crontab job.  I finally got some time at the beginning of a shift to sit with one of our senior guys and take a look at it, since the script provides data the entire team uses and they get cranky when it doesn’t run.  It turns out that the environment my cron jobs run in is highly different, as indicated by the following, which was obtained by adding a line to output env to a text file every time the crontab job ran.
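The capture trick is just a temporary crontab entry that dumps env somewhere readable; the schedule and output path here are placeholders:

```shell
# temporary crontab entry (added via `crontab -e`):
# dump cron's environment to a file every minute
* * * * * env > /tmp/cron-env.txt
```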

Compare that against the results of running env by hand:

Notice how sparse the PATH is when cron outputs the environment variables.  It turns out that anemic path lacked access to fping, which is integral to my script building out a list of live hosts within our lab environment.  Once that was fixed, the cron jobs hum along nicely and churn out an updated map of the lab every hour without me doing anything.  Now I know that crontab jobs run with fairly different environment variables than scripts run manually, which can cause all kinds of havoc if you don’t use full explicit paths in the bash scripts you plan to automate.
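The fix boils down to either exporting a fuller PATH or calling binaries by absolute path. A sketch with typical Fedora locations; the fping path and subnet are assumptions, not taken from my script:

```shell
# Option 1: give the script (or the top of the crontab) a real PATH
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

# Option 2: skip PATH lookup entirely and call the binary explicitly
# (fping typically installs to /usr/sbin; subnet is a placeholder)
# /usr/sbin/fping -a -g 10.0.0.0/24 2>/dev/null > /tmp/live_hosts.txt
```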

scripting: system-help

We have this handy script at work that pulls all kinds of useful details from a system and saves us a ton of time checking by hand, so I took a stab at making my own version for generic use.  It’s not very good at all, but it kinda works and could probably be expanded to do something actually useful.
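For flavor, the skeleton of such a collector might look something like this; the actual script lives in the repo below, and these particular checks are purely illustrative:

```shell
# Illustrative only: grab a few quick system details in one pass
system_help() {
  echo "== host:   $(hostname)"
  echo "== kernel: $(uname -sr)"
  echo "== uptime: $(uptime)"
  echo "== disk usage on /:"
  df -h / 2>/dev/null
  echo "== memory:"
  free -h 2>/dev/null || echo "(free not available)"
}

system_help
```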

Repo on GitHub