At work there's a Python coding convention that I tend to overlook a
lot. So when I post merge requests, there's a pretty good chance
someone's going to call me out on this, which leads to a follow-up
commit and another round of peer review. This can lead to an extra delay
of a few hours while I notice the comments, switch context back to that
merge request, make the changes, update the merge request, and wait for
another round of reviews. If I could find a way to check my code for
this convention before posting the merge request, I could get my code
merged a few hours faster.
The Convention
The coding convention I cannot internalize is as follows: in Python, the
format method for strings will call the __format__ method on its
arguments for you, so any code that looks like:
"interpolate these: {}{}".format(str(a), str(b))
need only look like:
"interpolate these: {}{}".format(a, b)
The Pylint Plugin
So googling around led me to this Ned Batchelder
post
from a few years back. That post also led to a couple pylint plugins
here.
Looking at pylint's own format
checker
reminded me that I should also be handling keyword arguments.
From the post and sample code, it looked like I needed to define a
checker class with a visit_callfunc method that would check when the
'format' method was used, and then check all the arguments to the
format call and throw an error if any of them were a function call to
str().
To come up with this I used an embarrassing amount of exploratory
programming to figure out
astroid. I wrote an initial visit_callfunc() method based on the
sample code that didn't do much more than dump out all the data about
the node argument via dir(node) and node.__dict__. Then I would call
pylint with the plugin against some sample source containing the error I
was trying to get the plugin to report.
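The feedback loop was just running pylint by hand with the plugin loaded, roughly like this (the module and file names here are placeholders, not the real ones):
# the plugin module needs to be importable, hence the PYTHONPATH tweak
PYTHONPATH=. pylint --load-plugins=str_format_checker sample_with_str_calls.py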
I ran the plugin against the existing code and found one lingering case
where the reviewers had allowed one of my unnecessary str() calls into
the codebase. It's been removed now.
Lessons Learned
Pylint plugins are pretty powerful, and I wouldn't shy away from
writing another one. In fact, I'm on the lookout for other excuses to
do so.
I use Ubuntu (16.04 for now) and Vagrant (1.9.0 for now) on a bunch of
my projects, and I've been running into something like this power
management bug for a while now, where after restoring from
suspension, my vagrant sessions would be dead and I'd have to 'vagrant
halt' and 'vagrant up' before another 'vagrant ssh' would succeed.
To work around this, I came up with some /etc/pm/sleep.d scripts which
would save any running vagrant boxes when suspending the workstation and
then resume the VMs when resuming the workstation.
Now if I'm in a 'vagrant ssh' session and Ubuntu suspends/resumes,
instead of coming back to a frozen session, I'll see I've been
disconnected from the ssh session, and I can do another 'vagrant ssh'
without having to halt/re-up the VM. That's better than nothing, but
the next step here is to start using something like screen or tmux in my
vagrant sessions so I can restore right back to where I left off.
So why bother with two scripts when you could have one script with a
single case statement? I wanted saving the running vagrant boxes to
happen while all the usual services and userspace infrastructure were
still running, so I wanted that script in the 00-49 range as per
the 'Sleep Hook Ordering Convention' portion of 'man 8 pm-action'.
However, I didn't want restoration to happen until all the services had
restarted, so I pushed it to the end of the service handling hook range.
I may want to revisit this, and rename it to 75_vagrant.
Note that in the resume script, the command is pushed into the background
since I didn't want to wait for the VMs to be restored before
resuming Ubuntu. I'm usually checking email or the web for a bit before
going back to my VMs, so I'm OK if they're not ready immediately.
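The scripts themselves boil down to a pair of small pm-utils hooks. Here's a minimal sketch of the idea - the file names, the 'vagrant global-status' plumbing, and running things via su are my assumptions rather than the exact scripts:
#!/bin/sh
# /etc/pm/sleep.d/01_vagrant_suspend (hypothetical name, early in the 00-49 range)
# pm hooks run as root, so run the vagrant commands as the user who owns the boxes.
case "$1" in
    suspend|hibernate)
        su - myuser -c "vagrant global-status | awk '/running/ {print \$1}' | xargs -r -n1 vagrant suspend"
        ;;
esac

#!/bin/sh
# /etc/pm/sleep.d/99_vagrant_resume (hypothetical name, after the service hooks)
# backgrounded so the desktop comes back without waiting on the VMs
case "$1" in
    resume|thaw)
        su - myuser -c "vagrant global-status | awk '/saved/ {print \$1}' | xargs -r -n1 vagrant resume" &
        ;;
esac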
Here are some other lessons I learned from these scripts:
Here are some issues I ran into installing NixOS
and how I eventually got around them.
Setting up a static IP since DHCP wasn't available.
My VM was hosted in an oVirt cluster where DHCP wasn't
working/configured, so the installation CD booted without a network.
Here's how I manually configured a static IP:
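It was roughly these commands, with the interface name, addresses, and gateway as placeholders for whatever your environment uses:
# check 'ip link' for the actual interface name first
ip addr add 192.168.122.50/24 dev enp0s3
ip link set enp0s3 up
ip route add default via 192.168.122.1
echo 'nameserver 192.168.122.1' > /etc/resolv.conf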
I spent a lot of time messing with various partitioning schemes until I
stumbled across one that worked. I didn't need disk encryption, and I
didn't want to bother trying UEFI with
ovirt, so here's
what I ended up with.
A 20G disk split into /dev/sda1 and /dev/sda2
/dev/sda1 is a 400MB 'WIN VFAT32' partition (type 'b', not type '4' !!)
/dev/sda2 is an LVM partition with the rest of the space
For the LVM, /dev/vg/swap is an 8G swap partition and /dev/vg/root
has the rest of the LVM partition
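In command form, that layout works out to roughly the following (a sketch using the names above, not a transcript of the actual install):
# /dev/sda1 created as type 'b' (W95 FAT32) and /dev/sda2 as type '8e' (Linux LVM) in fdisk
pvcreate /dev/sda2
vgcreate vg /dev/sda2
lvcreate -L 8G -n swap vg
lvcreate -l 100%FREE -n root vg
mkfs.vfat /dev/sda1
mkfs.ext4 /dev/vg/root
mkswap /dev/vg/swap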
In retrospect, I think a lot of my partitioning pain may have been
caused by trying to have /dev/sda1 set as a BIOS Partition (type '4'),
since I suspect the BIOS partition has to be under 32M.
Also in retrospect, I see only 23M is actually used on the current /boot
partition, so maybe 400MB was way too much and I should have gone with
/dev/sda1 being 32M and type '4'. ¯\_(ツ)_/¯
I think I also ran into problems using fsck on the boot partition
instead of fsck.vfat.
When the boot partition wasn't working, grub would fall into rescue
mode and the various 'set prefix / set root / insmod' fixes like this
one
or this other
one
didn't work.
What did work here was booting the system with the install CD again,
mounting /mnt/boot manually and seeing that it failed, or that /mnt/boot
contained gibberish after mounting, and then unmounting /mnt/boot and
using testdisk to fix the
partition type. Testdisk really saved the day.
Mounting the boot partition
Before running nixos-install, I had to also mount the boot partition
under /mnt/boot:
> mount /dev/vg/root /mnt
> mkdir -p /mnt/boot
> mount /dev/sda1 /mnt/boot
> nixos-install
Verify the /mnt/etc/nixos/hardware-configuration.nix device paths
When I was messing with the disk partitioning, I rebuilt the /dev/sda1
partition a couple of times. Apparently when you do that, you get a new
UUID for the device.
This meant the "/boot" file system in
/mnt/etc/nixos/hardware-configuration.nix was using a device path that
was no longer valid. I updated the file to point to the current /boot
device and reran 'nixos-install'.
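Finding the new UUID to put into the "/boot" entry is just a matter of asking blkid for the rebuilt partition:
# prints the filesystem UUID (and type) for the new /boot partition
blkid /dev/sda1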
It looks like nixos-install isn't verifying the device paths are valid,
since nixos-install ran OK with the invalid device paths.
Configuring a static IP in /mnt/etc/nixos/configuration.nix
Here's what I ended up adding to the configuration.nix file to set up
static IP:
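It was something along these lines, with the interface name and addresses as placeholders (the exact option names vary between NixOS releases, so treat this as a sketch):
networking.interfaces.eth0.ip4 = [ { address = "192.168.122.50"; prefixLength = 24; } ];
networking.defaultGateway = "192.168.122.1";
networking.nameservers = [ "192.168.122.1" ];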
Heat the garlic and onion until the onion is clear.
Add garlic, onion, and spices to the turkey.
Heat the rest of the veggies in a separate pan.
When the veggies are tender, add half of them to the ground turkey.
Put the other half in a blender and blend them into a sauce.
Add the veggie sauce to the ground turkey.
Adjust sauciness by adding a cup or two of water to the ground turkey.
Continue heating until you're convinced the turkey is finished
cooking.
Salt & pepper to taste.
I only blended half the veggies because the pan I was using was too
small to cook all the veggies at once, but it turned out really nice,
especially with some chipotle hot sauce.
I'm starting an occasional series about lessons I've learned after
finishing with a project. The kick-off article is about a
certmaster fork I've been
working on. I've been using certmaster at $work for a few years now,
but when we wanted to start using multiple certificate authorities, we
had to spin up different instances of certmaster, with each instance
listening on its own IP/port. It would be better if a single instance of
certmaster could serve multiple CAs by adding a '--ca' flag. This
is the functionality that my fork of
certmaster provides,
and here are the lessons I learned while working on this:
bats versus shunit2
certmaster doesn't include any tests, so I wanted to write some
functional tests to verify my changes worked as expected.
I started out working with bats, but it fell down when I
needed to push a command into the background - it just wouldn't do it.
I tried the 'gotcha' suggestions from this engine yard
post but
to no avail. I switched the tests to
shunit2 and had no trouble pushing
commands into the background.
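For illustration, a shunit2 test that pushes a command into the background looks something like this (the test itself is made up, it's just the shape that matters):
#!/bin/sh
testBackgroundedCommand() {
    sleep 5 &
    bg_pid=$!
    assertNotNull 'expected a PID for the backgrounded command' "$bg_pid"
    kill "$bg_pid"
}
# load shunit2 itself (assumes it's on the PATH)
. shunit2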
Assigning here documents to variables
variable=$(cat <<EOF
this text will get assigned to variable.
EOF
)
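Note the closing ')' of the command substitution goes on its own line after the EOF marker. A quick check that the assignment worked:
echo "$variable"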
I've found using Lua macros in rpm spec files to be pretty useful. I
haven't found many examples of their use online, so here's how I ended
up using them.
I had a situation where I needed to make a subpackage from all the files
in a number of different subdirectories. The structure of the files in
the subdirectories was fixed, but the number of subdirectories could
change over time, and I didn't want to have to update the spec file
each time a new subdirectory was added or removed.
I've started backing up one of my systems to
S3.
The instructions from the Phusion blog worked almost perfectly, except
my TARGET line was
TARGET='s3+http://<my-bucket-name>'
Also on the AWS side, I set up a lifecycle rule to archive the backups
to Glacier after 7 days.
I did run into some issues getting the backups to work together with
powernap, which was configured to put
the system to sleep after a few minutes of inactivity.
Powernap was causing a problem on two fronts. First, the system was
going to sleep mid-backup since full backups take longer than the
powernap inactivity timeout. Second, the backups were scheduled for the
middle of the night when the system would normally already be asleep.
To get around the mid-backup sleep issue, I made a
/usr/local/bin/duply-nightly script which shuts down powernap before
calling duply and restarts it afterwards.
To get around the system-already-asleep issue, I'm using an RTC
wakeup
in /usr/local/bin/duply-nightly to set the system to wake a few minutes
before the cron job kicks off (but not early enough for powernap to put
the system to bed again...)
The first night I ran the backup, I had to prime the
/sys/class/rtc/rtc0/wakealarm time manually, but since then the script
has set the wakeup time for the next day.
The final /usr/local/bin/duply-nightly script is below
#!/bin/sh +x
/usr/bin/logger "Running nightly backup from $0"
# Disable powernap during the backup
service powernap stop
/usr/bin/duply nightly backup
# Wakeup the system at 3:00am tomorrow
echo 0 > /sys/class/rtc/rtc0/wakealarm
echo `date '+%s' -d '3am next day'` > /sys/class/rtc/rtc0/wakealarm
# Enable powernap again.
service powernap start
The cron job that kicks off /usr/local/bin/duply-nightly is below
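A sketch of the entry (the exact minute is my assumption; it just needs to land a few minutes after the 3:00am RTC wakeup):
# m h dom mon dow user command
10 3 * * * root /usr/local/bin/duply-nightly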
I spent way too long the other day trying to figure out why some Hiera
variable wasn't available in some Puppet manifest.
It turns out there was a typo in the YAML file where the first line of
the file only had two dashes instead of three.
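The offending file looked something like this, with placeholder keys:
--
first_key: first value
second_key: second value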
In this case, the Ruby 1.8.7 YAML parser corrupts the first entry in
the YAML file, adding the two dashes to the beginning of the key,
instead of throwing a parser error.
I couldn't find an existing bug for this, but I didn't look too hard
since this has been fixed in Ruby 1.9:
irb(main):001:0> require 'yaml'
=> true
irb(main):002:0> a = YAML.load_file("only-two-dashes.yaml")
Psych::SyntaxError: (only-two-dashes.yaml): couldn't parse YAML at line 1 column 1
    from /usr/lib/ruby/1.9.1/psych.rb:154:in `parse'
    from /usr/lib/ruby/1.9.1/psych.rb:154:in `parse_stream'
    from /usr/lib/ruby/1.9.1/psych.rb:125:in `parse'
    from /usr/lib/ruby/1.9.1/psych.rb:112:in `load'
    from /usr/lib/ruby/1.9.1/psych.rb:229:in `load_file'
    from (irb):2
    from /usr/bin/irb1.9.1:12:in `<main>'
So if you're using Ruby 1.8.7, and it looks like the first item in your
YAML file is being dropped for some reason, check the first line of the
file has 3 dashes.
I've implemented a version of Daniel Patterson's Hacker's GMail
Replacement,
and finally got around to publishing it as a Puppet
module. Even if you don't
know Puppet, it shouldn't be too hard to walk through the manifests and
see what setting it up involved.
I don't live inside Emacs as much as Mr Patterson apparently does, so I
left out the notmuch and afew content. Instead I added
radicale support for publishing my calendar and
todo lists, and I've started moving towards setting up LDAP for my
address book. That may be overkill - maintaining an LDAP address book
for personal use looks like a PITA.
So far, it's working well with my phone using
K9 for mail and
aCal for a calendar. On the non-phones,
Thunderbird/Lightning is working out well.
I've set up gmail to forward to my non-gmail account. Pretty much the
only time I log into gmail (or google for that matter) in the past
couple months has been to delete content or wipe profile data.
It was an interesting exercise. The next steps for this project would be
to update the manifests to follow the Puppet Forge documentation
standards, to include some rspec-puppet tests, and then maybe to look
into setting up Packer to build a VM that
applies the module.
Here are the steps I've taken recently to cut down on the amount of
free data I've been passing on to Google, and by proxy any unsupervised
NSA contractors who may be running amok. I've taken to calling this
project 'Degooglization'.
Uninstalled Chrome, switched back to Firefox for daily browsing
Shuttered the G+ account associated with my identity
Migrated away from gmail
Implemented most of Daniel Patterson's Hacker's Replacement
for
GMail
(minus notmuch since I don't live in Emacs that deeply)
On StackOverflow, I stopped using Google as an OpenID source and
switched to
openid.stackexchange.com
Removed as many Google apps from my phone as I could.
Outside of Google, I also shut down the Flickr account associated with
my identity. I haven't had a Facebook account in ages so there was
nothing to shut down there. I haven't shut down my Twitter account yet, but it's
just a matter of time, I suppose.
I've also found that diversifying my
passwords
has helped a lot: in a few cases where I'd gone to google out of habit,
it's been such a PITA to retrieve my password that I just do something
else, like take a walk or catch up on the laundry.
Next steps: setting up a Dropbox replacement.