An Exploration into Jinja2 Unit Tests With Pytest

Posted on Mon 20 November 2023 in lessons • Tagged with lessons, jinja2, pytest, salt

This post covers an approach I've used for adding pytest-style unit tests to my Salt-related projects.

The content below may be specific to Salt, but the testing techniques should work on any project using Jinja2.

Project Directory Structure

In my existing salt project repo, I added a tests subdirectory, a setup.cfg with the contents below, and a test target in the project's Makefile.

I also installed the pytest and jinja2 eggs in a virtualenv in my working directory.

├─ test_project repo
├─── .git
├─── init.sls
├─── map.jinja
├─── templates
├─────── some_template.cfg
├─── tests
├─── setup.cfg
├─── Makefile
├─── env
├─────── ... pytest
├─────── ... jinja2

Here's a snippet of the Makefile for kicking off the tests:

Note that since the tests are python code, you should run them through whatever linters and style-checkers you're using on the rest of your python code.

    pycodestyle ./tests/*.py
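The test target itself can be as small as activating the virtualenv and calling pytest. A minimal sketch (the env/ path follows the directory layout above; the exact target body is an assumption):

```make
test:
	. ./env/bin/activate && pytest ./tests
```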

And here's the setup.cfg:

Without the extra '--tb=native' argument, pytest would sometimes throw an internal error when jinja ended up throwing an exception, as we'll see later below.

python_files = test_*.py tests/ tests/*/
#uncomment the line below for full unittest diffs
addopts =
        # Any --tb option except native (including no --tb option) throws an internal pytest exception
        # when jinja exceptions are thrown
        --tb=native
        # Uncomment the next line for verbose output
        # -vv


Note there is a test_*.py file for each file that includes Jinja2 markup.


The file contains the common fixtures used by the tests. I've tried adding docstring comments to explain how to use the fixtures, but see also the examples below.

import pytest
from unittest.mock import Mock

import jinja2
from jinja2 import Environment, FileSystemLoader, ChoiceLoader, DictLoader, StrictUndefined

class RaiseException(Exception):
    """ Exception raised when using raise() in the mocked Jinja2 context"""

@pytest.fixture
def mocked_templates(request):
    """ A dict of template names to template content.
        Use this to mock out Jinja 'import "template" as tmpl' lines. """
    mocked_templates = {}
    return mocked_templates

@pytest.fixture
def jinja_env(request, mocked_templates):
    """ Provide a Jinja2 environment for loading templates.
        The ChoiceLoader will first check the DictLoader when mocking any 'import'-style templates,
        then the FileSystemLoader will check the local file system for templates.

        The DictLoader is first so Jinja won't end up using the FileSystemLoader for
        templates pulled in with an import statement that doesn't include the 'with context' modifier.

        Setting undefined to StrictUndefined throws exceptions when the templates use undefined variables. """

    # Look for templates under the 'latest' directory; this assumes the tests
    # are run from the root of the project.
    test_loader = ChoiceLoader([DictLoader(mocked_templates),
                                FileSystemLoader('latest')])

    env = Environment(loader=test_loader,
                      undefined=StrictUndefined,
                      extensions=['jinja2.ext.with_', 'jinja2.ext.loopcontrols'])

    return env

@pytest.fixture(scope='session', autouse=True)
def salt_context():
    """ Provide a set of common mocked keys.
        Currently this is only the 'raise' key for mocking out the raise() calls in the templates,
        and an empty 'salt' dict for adding salt-specific mocks. """

    def mocked_raise(err):
        raise RaiseException(err)

    context = {
        'raise': mocked_raise,
        'salt': {}
    }

    return context


For purposes of the sections below, here's what the init.sls looks like:

# {% set version = salt['pillar.get']('version', 'latest') %}
# version: {{ version }}

# {% if version == 'nope' %}
#     {{ raise("OH NO YOU DIDN'T") }}
# {% endif %}

Mock out the Jinja Context

Let's test that rendering init.sls returns a version key with some value.

Being able to mock out the salt pillar.get() function was the big breakthrough that made it possible to write any sort of unit tests for the Salt states.

  @pytest.fixture
  def poc_context(self, salt_context):
      """ Provide a proof-of-concept context for mocking out salt[function](args) calls """
      poc_context = salt_context.copy()

      def mocked_pillar_get(key, default):
          """ Mocked salt['pillar.get'] function """
          pillar_data = {
              'version': '1234'
          }
          return pillar_data.get(key, default)

      # This is the super sauce:
      # We can mock out the ``salt['function'](args)`` calls in the salt states by
      # defining a 'salt' dict in the context, whose keys are the function names
      # and whose values are the mocked functions.
      poc_context['salt']['pillar.get'] = mocked_pillar_get

      return poc_context

def test_jinja_template_poc(self, jinja_env, poc_context):
    """ Render a template and check it has the expected content """

    # This assumes the tests are run from the root of the project.
    # The conftest sets the jinja_env to look for files under the 'latest' directory.
    template = jinja_env.get_template('init.sls')

    # Return a string of the rendered template.
    result = template.render(poc_context)

    # Now we can run assertions on the returned rendered template.
    assert "version: 1234" in result
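To see the mocking trick in isolation, here's a self-contained sketch of the same pattern (the inline template is made up for illustration):

```python
from jinja2 import Environment

env = Environment()
template = env.from_string(
    "version: {{ salt['pillar.get']('version', 'latest') }}")

# The context's 'salt' dict stands in for Salt's real execution modules:
# looking up salt['pillar.get'] returns our mock, which Jinja then calls.
context = {
    'salt': {
        'pillar.get': lambda key, default: {'version': '1234'}.get(key, default),
    }
}

print(template.render(context))  # → version: 1234
```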

Mocking a raise() error

Now, let's see how we can test triggering the raise() error based on the pillar data:

@pytest.fixture
def bad_context(self, salt_context):
    """ Let's see what happens if the template triggers a raise() """

    # The base salt_context from includes a 'raise' entry that raises a RaiseException
    bad_context = salt_context.copy()
    bad_context['salt']['pillar.get'] = lambda k, d: 'nope'
    return bad_context

def test_raise_poc(self, jinja_env, bad_context):
    """ Try rendering a template that should fail with some raise() exception """

    with pytest.raises(RaiseException) as exc_info:
        template = jinja_env.get_template('init.sls')
        result = template.render(bad_context)

    raised_exception = exc_info.value
    assert str(raised_exception) == "OH NO YOU DIDN'T"

Mocking imported templates

Sometimes the Jinja templates may try to import other templates that are either out of scope for the current project, or whose import doesn't include the 'with context' modifier, so the Jinja context isn't available when rendering the template.

In this case we can use the DictLoader portion of the jinja_env to mock out the imported template.

In this example, let's assume the following template file exists in the templates directory as missing-import.tmpl:

{%- import 'missing.tmpl' as missing -%}
Can we mock out missing/out of scope imports ?

Mocked: {{ missing.perhaps }}
Macro Call: {{ missing.lost('forever') }}

Now here is a test that can mock out the missing.tmpl contents, including the lost() macro call:

def test_missing_template(self, jinja_env, mocked_templates, salt_context):
    """ In this example, templates/missing-import.tmpl tries to import a non-available 'missing.tmpl' template.
        The ChoiceLoader checks the DictLoader, which checks mocked_templates and finds a match. """

    mocked_templates['missing.tmpl'] = """
       {% set perhaps="YES" %}
       {% macro lost(input) %}MOCKED_LOST{% endmacro %}
    """

    missing_template = jinja_env.get_template('templates/missing-import.tmpl')
    missing_result = missing_template.render(salt_context)
    assert "Mocked: YES" in missing_result
    assert "Macro Call: MOCKED_LOST" in missing_result
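The loader arrangement can also be demonstrated in isolation. A minimal sketch (inline template and mocked name are made up):

```python
from jinja2 import ChoiceLoader, DictLoader, Environment

# The DictLoader is consulted first, so any name present in the dict
# shadows a real template file of the same name.
mocked_templates = {'missing.tmpl': "{% set perhaps = 'YES' %}"}
env = Environment(loader=ChoiceLoader([DictLoader(mocked_templates)]))

template = env.from_string(
    "{% import 'missing.tmpl' as missing %}Mocked: {{ missing.perhaps }}")
print(template.render())  # → Mocked: YES
```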

Mocking a macro call

Let's say I have a Jinja2 macro defined below:


# {% macro test_macro(input) %}
#   {% if input == 'nope' %}
#     {{ raise("UNACCEPTABLE") }}
#   {% endif %}
#   {% set version =  salt['pillar.get']('version', 'latest') %}
"macro sez {{ input }}":
  - text: "{{ input }}"

"version sez {{ version }}":
  - text: "{{ version }}"

# {% endmacro %}

Testing out this macro is a little more involved: first we have to append a call to the macro to the template source, then render the combined source. Note we're reusing the poc_context fixture defined earlier, so the pillar.get() call is still mocked out to return 1234 for the version.

def test_get_pillar_from_macro(self, jinja_env, poc_context):
    """ If we want to reference the mocked context in the macros, we need
        to render the source + macro call within a context. """

    # The '[0]' is because get source returns a (source,filename,up-to-date) tuple.
    template_source = jinja_env.loader.get_source(jinja_env, 'macro.sls')[0]
    new_template = jinja_env.from_string(template_source + "{{ test_macro('hello') }}")
    result = new_template.render(poc_context)

    assert "macro sez hello" in result
    assert "version sez 1234" in result
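The source-plus-call pattern is easy to see with a toy macro (names are made up):

```python
from jinja2 import Environment

env = Environment()

# A macro definition alone renders to nothing; appending a call to the
# source gives us a template whose output we can assert on.
macro_source = "{% macro greet(name) %}macro sez {{ name }}{% endmacro %}"
template = env.from_string(macro_source + "{{ greet('hello') }}")
print(template.render())  # → macro sez hello
```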

It's also possible to check that the macro raises an error based on the input:

def test_raise_from_macro(self, jinja_env, salt_context):
    """ In this test, try forcing a raise() from within a macro """

    with pytest.raises(RaiseException) as exc_info:
         template_source = jinja_env.loader.get_source(jinja_env, 'macro.sls')[0]
         new_template = jinja_env.from_string(template_source + "{{ test_macro('nope') }}")
         result = new_template.render(salt_context)

    raised_exception = exc_info.value
    assert str(raised_exception) == "UNACCEPTABLE"

FECUNDITY: Checking for undefined variables during template rendering

Back in the day I learned that one of the virtues of a scientific theory is 'fecundity': the ability of the theory to predict new behavior the original theory hadn't considered.

It looks like this may be called fruitfulness now, but whenever I stumble across something like this, I still shout 'FECUNDITY' internally to myself. :shrug:

While I was working on this project, I noticed the jinja Environment constructor has an undefined argument that defaults to Undefined. I also noticed StrictUndefined was another value the undefined argument could take.

It would be useful if the tests could throw exceptions when they ran into undefined variables. These could come from typos in the templates, or from not mocking out all the global variables a template uses.

So I tried making a Jinja Environment with undefined=StrictUndefined, wrote a test against a template with a typo in a variable name to see if the test would raise an exception, and it did!

This is much more useful than the default Jinja behavior, where the typo would render as an undefined value and likely surface in the output as an empty string.

It's also more useful than setting undefined to DebugUndefined, which sometimes raised an exception, but sometimes left the unmodified '{{ whatever }}' strings in the rendered templates. Bleh.
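The difference is easy to demonstrate directly (inline templates for illustration):

```python
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError

# Default behavior: an undefined variable silently renders as an empty string.
assert Environment().from_string(
    "version: {{ verion }}").render(version='1234') == "version: "

# With StrictUndefined, the same typo raises UndefinedError at render time.
try:
    Environment(undefined=StrictUndefined).from_string(
        "version: {{ verion }}").render(version='1234')
except UndefinedError as err:
    print(err)  # → 'verion' is undefined
```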

Here's the sample template I used, called unexpected_variable.sls. It's the same as the original init.sls, but with a 'verion' typo:


# {% set version = salt['pillar.get']('version', 'latest') %}
# version: {{ version }}

# {% if verion == 'nope' %}
#     {{ raise("OH NO YOU DIDN'T") }}
# {% endif %}

And let's try adding this test, which is the same as the earlier test_jinja_template_poc() test, but with the buggy template:

def test_unexpected_variable(self, jinja_env, poc_context):
    """ Render a template and check it has the expected content """

    # This assumes the tests are run from the root of the project.
    # The conftest sets the jinja_env to look for files under the 'latest' directory.
    template = jinja_env.get_template('unexpected_variable.sls')

    # Return a string of the rendered template.
    result = template.render(poc_context)

    # Now we can run assertions on the returned rendered template.
    assert "version: 1234" in result

This test fails with the undefined-variable exception below! Cool. I can fix the typo and rerun the test to get it passing again! FECUNDITY!

==================================================== FAILURES =======================================================
_________________________________________ TestJinja.test_unexpected_variable __________________________________________
Traceback (most recent call last):
  File "/my/working/dir/", line 150, in test_unexpected_variable
    result = template.render(poc_context)
  File "/usr/lib/python3.6/site-packages/jinja2/", line 76, in render
     return original_render(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/jinja2/", line 1008, in render
     return self.environment.handle_exception(exc_info, True)
  File "/usr/lib/python3.6/site-packages/jinja2/", line 780, in handle_exception
     reraise(exc_type, exc_value, tb)
   File "/usr/lib/python3.6/site-packages/jinja2/", line 37, in reraise
     raise value.with_traceback(tb)
   File "unexpected_variable.sls", line 6, in top-level template code
     # {% if verion == 'nope' %}
 jinja2.exceptions.UndefinedError: 'verion' is undefined
========================================= 1 failed, 5 passed in 0.89 seconds ==========================================

Running the tests

The tests are kicked off via 'pytest' like any other python project using pytest.

workstation:~/projects/test_project.git# source ./env/bin/activate
(env) workstation:~/projects/test_project.git# pytest
===================================================================== test session starts =====================================================================
platform linux -- Python 3.6.8, pytest-2.9.2, py-1.4.32, pluggy-0.3.1
rootdir: /vagrant, inifile:
plugins: catchlog-1.2.2
collected 5 items

latest/tests/ .....


I based this work on some ideas from the blog post A method of unit testing Jinja2 templates by alexharv074.

TS Stands For Test Station

Posted on Sat 17 December 2022 in lessons • Tagged with now-you-know, infrastructure, test-stations

Every now and then someone will come along and spraypaint yellow "TS"s on the sidewalks around the neighborhood, with arrows next to them. The arrows lead to little square metal covers with a hole in the middle.

From 99% Invisible, I figured it had something to do with the gas line since it was yellow, and that there was probably some sort of access point under the covers, but I couldn't figure out why they were using 'TS' instead of something like 'GL' or 'GAS'.

Recently I found one of the TS's pointing to a more informative cover:

A yellow spraypainted 'TS' pointing to a metal cover including the text 'Test Station'.

Apparently it's the test station for a Cathodic Protection System.

Podman/Testinfra/Salt Lessons Learned

Posted on Sun 13 November 2022 in lessons • Tagged with lessons, podman, testinfra, salt

Oh my, less than two years between posts. I'm on a roll !

I've been looking into using Podman and Testinfra to test Salt states.

I'd like to add some unit tests and a Containerfile to an existing repo of salt states, where running 'pytest' in the repo's workspace would spin up the container and run the tests against it, and then tear down the container.

The tests would run 'salt state.apply' commands against the container, applying different sets of pillar data depending on the test.

Project Directory Structure

First let's set up a directory structure for the project that includes the states, their tests, and any needed test data. In the case of salt states, the test data will be pillar files and files served by ext_pillar. The directory structure below is what I ended up using:

├─ test_project repo
├─── .git
├─── env
├────── ... testinfra egg
├─── Containerfile
├─── setup.cfg
├─── tests
├───── test_*.py
├───── data
├──────── ext_pillar
├──────── pillar
├────────── top.sls
├────────── test_zero.sls
├────────── test_one.sls
├────────── ...
├──────── top.sls
├─── test_project
├───── *.sls
├───── *.jinja
├───── templates
├──────── *.jinja
├───── files
├───── ...

Assuming all these files are stored in git, there's a .git directory from when you cloned the repo.

The 'env' directory is a python virtualenv where the testinfra egg has been installed. You can skip the virtualenv if you're pulling in testinfra from a global package.

Containerfile is, well, a Podman Containerfile, and setup.cfg contains some pytest-specific settings.

The tests directory is where the testinfra test_*.py files are stored.

The tests/data/pillar directory will end up being mapped to the /srv/pillar directory in the test container. Similarly, tests/data/ext_pillar will be mapped to /srv/ext_pillar.

The test_project directory includes the *.sls and *.jinja files, and any other salt-related subdirectories like 'templates', 'files', 'macros', etc. This directory will be mapped to /srv/salt/test_project in the container.


The Containerfile I'm using for this project is below.

# I'm using Ubuntu 20.04 for this project-under-test, so pull in the stock Ubuntu image for that version
FROM ubuntu:focal
RUN apt-get update

# The stock image doesn't include curl, so install it and bootstrap salt
# Longterm, I would host the bootstrapping script internally in case that site disappeared.
RUN apt-get install -y curl
RUN curl -L | sh -s --

# Configure salt run as a masterless minion
RUN echo "file_client: local" > /etc/salt/minion.d/masterless.conf
RUN printf "local" > /etc/salt/minion_id

# Set up the /srv/salt environment
RUN mkdir -p /srv/salt
RUN mkdir -p /srv/ext_pillar/hosts/local/files
RUN printf "ext_pillar:\n  - file_tree:\n      root_dir: /srv/ext_pillar\n" >>  /etc/salt/minion.d/masterless.conf

# Delay setting up /srv/salt/top.sls until the container starts, so PROJECT can be sent in as an ENV
RUN printf "printf \"base:\\n    '*':\\n      - \${PROJECT}\\n\" > /srv/salt/top.sls" >> /root/.bashrc

# Create a local user
RUN useradd local_user

# The Salt git states apparently assume git is already installed on the host, so install it.
RUN apt-get install -y git
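With PROJECT=test_project in the environment, the .bashrc printf above writes a /srv/salt/top.sls like this when the container's shell starts:

```yaml
base:
    '*':
      - test_project
```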

Building and verifying the saltmasterless:latest image

Using this Containerfile, I built a saltmasterless:latest image:

workstation:~/projects/test_project.git# podman build -t saltmasterless:latest .

Then with this image, I can start a container with volumes mapping tests/data/pillar to /srv/pillar, tests/data/ext_pillar to /srv/ext_pillar, and test_project to /srv/salt/test_project:

workstation:~/projects/test_project.git# podman run -it --env "PROJECT=test_project" -v ${PWD}/test_project:/srv/salt/test_project -v ${PWD}/tests/data/pillar:/srv/pillar -v ${PWD}/tests/data/ext_pillar:/srv/ext_pillar/hosts/local/files --name test_box --hostname local saltmasterless:latest
root@local:/# find /srv
root@local:/# exit
workstation:~/projects/test_project.git# podman rm -f test_box


The setup.cfg file is mostly used to tell pytest to ignore the salt states directory:
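This boils down to pytest's norecursedirs option. A sketch of the fragment (the section name and directory list are assumptions based on the layout above):

```ini
[tool:pytest]
norecursedirs = test_project env
```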


As mentioned above, the tests/data/pillar directory will be mapped to /srv/pillar in the container, but let's look at the top.sls a little closer. From the Containerfile, /etc/salt/minion_id was set to 'local', so normally the top.sls file will end up using /srv/pillar/test_zero.sls for its pillar data.

But let's say we want to run a test with some other pillar data. In that case, the test will use the salt-call '--id' argument to run the command as a different minion id. So with the top.sls file below, running 'salt-call --local --id=test_one state.apply' will use the test_one.sls pillar data instead of test_zero.sls.

{{ saltenv }}:
  'local':
    - match: glob
    - ignore_missing: True
    - test_zero

  'test_one':
    - test_one

  'missing_mandatory_pillar_item':
    - missing_mandatory_pillar_item

tests/ host fixture

The tests/ file includes a host fixture (based on an example I found). Note that the podman_cmd is pretty much the same as the command used above when testing the container. The cwd-related logic is because the -v args require full path names.

import os
import subprocess

import pytest
import testinfra

# scope='session' uses the same container for all the tests;
# scope='function' uses a new container per test function.
@pytest.fixture(scope='session')
def host(request):

    cwd = os.getcwd()

    podman_cmd = "podman run -d -it --env PROJECT=test_project -v ${PWD}/test_project:/srv/salt/test_project -v ${PWD}/tests/data/pillar:/srv/pillar -v ${PWD}/tests/data/ext_pillar:/srv/ext_pillar/hosts/local/files --name test_box --hostname local saltmasterless:latest bash"
    podman_cmd = podman_cmd.replace("${PWD}",cwd)
    podman_cmd_list = podman_cmd.split(' ')

    # run a container
    podman_id = subprocess.check_output(podman_cmd_list).decode().strip()
    # return a testinfra connection to the container
    yield testinfra.get_host("podman://" + podman_id)

    # at the end of the test suite, destroy the container
    subprocess.check_call(['podman', 'rm', '-f', podman_id])

tests/ full salt run test

Here's a test that does a full salt state.apply on the container. This test is slow, since the container starts with just salt and git installed, and the project-under-test makes a lot of changes. Note the use of the '--local' argument to tell salt not to try to pull data from a salt master.

def test_full_salt_run(host):
    print('running salt-call state.apply.  This will take a few minutes')
    cmd_output ='salt-call --state-output=terse --local state.apply')

    print('cmd.stdout: ' + cmd_output.stdout)

    assert cmd_output.rc == 0
    assert cmd_output.stderr == ''

tests/ alternative pillar data test

In this example, suppose ./test_project/map.jinja included a check like below:

{% if not salt['pillar.get']('mandatory_pillar_item') %}
  {{ raise('mandatory_pillar_item is mandatory') }}
{% endif %}

And then there's a 'missing_mandatory_pillar_item' entry in the ./tests/data/pillar/top.sls as per above, and a ./tests/data/pillar/missing_mandatory_pillar_item.sls file exists that's missing the mandatory pillar item.

Then a test like the one below can force a salt run that uses this pillar data by passing the '--id' argument, and an assertion can check that the error was raised.

def test_missing_mandatory_pillar_item(host):
    print('running another salt-call state.apply with bad pillar data.')
    cmd_output ='salt-call --state-output=terse --local --id=missing_mandatory_pillar_item state.apply')
    assert "mandatory_pillar_item is mandatory" in cmd_output.stderr
    assert cmd_output.rc != 0

Running the tests

The tests are kicked off via 'pytest' like any other python project using pytest.

workstation:~/projects/test_project.git# source ./env/bin/activate
(env) workstation:~/projects/test_project.git# pytest
================================================================================ 3 passed in 333.88s (0:05:33) ================================================================================

What's Next

  • Set up the salt bootstrapping so it'll work without having to reach out to the external bootstrap site
  • Move the host fixture out of the test file into ./tests/
  • Speed up the tests. As mentioned above, a full 'salt state.apply' for a project can take a few minutes on my workstation

Expired CA Notes

Posted on Mon 31 October 2022 in hints • Tagged with hint, openssl, x509, expiration

Recently, I ran some tests to see what would happen when my root CA cert expired, and what I'd need to do to update the cert.

Spoiler alert: Updating the CA cert was not that hard...

First I created a CA that expired in 2 hours using the code below:

from os.path import join

from OpenSSL import crypto

# The following script will create a self-signed root CA cert.

CN = "expired-ca-test"

pubkey = "%s.crt" % CN
privkey = "%s.key" % CN

pubkey = join(".", pubkey)
privkey = join(".", privkey)

k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, 2048)

# create a self-signed cert
cert = crypto.X509()
cert.get_subject().CN = CN
cert.set_serial_number(0)
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(7200)  # CA is only good for 2 hours
cert.set_issuer(cert.get_subject())
cert.set_pubkey(k)
xt = crypto.X509Extension(b'basicConstraints', 1, b'CA:TRUE')
cert.add_extensions([xt])

cert.sign(k, 'sha512')
pub = crypto.dump_certificate(crypto.FILETYPE_PEM, cert)
priv = crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
open(pubkey, "wt").write(pub.decode("utf-8"))
open(privkey, "wt").write(priv.decode("utf-8"))

This block is based on how the ancient certmaster program created its CA.

Then I created a serial-number file with contents "01":

> echo 01 >

Then I issued a cert against this CA:

> openssl genrsa -out pre-expired-example.key 4096
> openssl req -new -key pre-expired-example.key -out pre-expired-example.csr
> openssl x509 -req -days 365 -in pre-expired-example.csr -CA expired-ca-test.crt  -CAkey expired-ca-test.key  -CAserial -out pre-expired-example.crt
> openssl x509 -in pre-expired-example.crt -text
> openssl verify -verbose -CAfile expired-ca-test.crt pre-expired-example.crt
pre-expired-example.crt: OK

Then I waited 2 hours and went back to check the certs:

> openssl x509 -in expired-ca-test.crt -noout -enddate
notAfter=Oct  2 17:41:11 2022 GMT

Then I tested what would happen if I tried verifying the cert signed with the expired CA:

> openssl verify -verbose -CAfile expired-ca-test.crt pre-expired-example.crt
CN = expired-ca-test
error 10 at 1 depth lookup: certificate has expired
error pre-expired-example.crt: verification failed

THIS FAILED. I had thought previously signed certs would continue to verify against the expired CA, but that new certs couldn't be created. Instead, previously signed certs won't validate against the expired CA.

Then I tried signing a new cert with the expired CA. Surely this will fail, right?

> openssl genrsa -out expired-example.key 4096
> openssl req -new -key expired-example.key -out expired-example.csr
> openssl x509 -req -days 365 -in expired-example.csr -CA expired-ca-test.crt -CAkey expired-ca-test.key -CAserial -out expired-example.crt

THIS WORKED, in that it created the cert, though verification still fails:

> openssl verify -verbose -CAfile expired-ca-test.crt expired-example.crt
CN = expired-ca-test
error 10 at 1 depth lookup: certificate has expired
error expired-example.crt: verification failed

Now let's see what happens if we update the CA cert with the script below.

This is almost the same as the script above, except the original CA key is reused instead of generating a new key. Also the CN and serial number need to be the same as the original expired CA cert.

Verification will fail if the CN or serial number values are not the same as the original CA's, but unfortunately I didn't save the errors from when I tried using 'updated-ca-test' as the CN, or when I tried bumping the serial number to 1.

from os.path import join

from OpenSSL import crypto

# The following script will create a self-signed root CA cert
# that reuses the expired CA's key, CN, and serial number.

CN = "updated-ca-test"

pubkey = "%s.crt" % CN
privkey = "%s.key" % CN

pubkey = join(".", pubkey)
privkey = join(".", privkey)

# Instead of creating a new key, use the old CA's key
# nope: k = crypto.PKey()
# nope: k.generate_key(crypto.TYPE_RSA, 2048)
st_key = open('expired-ca-test.key', 'rt').read()
k = crypto.load_privatekey(crypto.FILETYPE_PEM, st_key)

# create a self-signed cert
cert = crypto.X509()
cert.get_subject().CN = 'expired-ca-test'  # keep the same CN as the old CA cert
cert.set_serial_number(0)                  # keep the same serial number as the old CA cert
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(63072000)  # CA is good for 2 years
cert.set_issuer(cert.get_subject())
cert.set_pubkey(k)
xt = crypto.X509Extension(b'basicConstraints', 1, b'CA:TRUE')
cert.add_extensions([xt])

cert.sign(k, 'sha512')
pub = crypto.dump_certificate(crypto.FILETYPE_PEM, cert)
priv = crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
open(pubkey, "wt").write(pub.decode("utf-8"))
open(privkey, "wt").write(priv.decode("utf-8"))

Note that this code writes an updated-ca-test.key that's the same as expired-ca-test.key, so I could have continued using expired-ca-test.key in the cert creation below.

> diff expired-ca-test.key updated-ca-test.key
> echo $?

Next I created a new serial file. I could have continued using the old one:

> cp

Now let's see if the new CA can be used to create a new cert:

> openssl genrsa -out post-expired-example.key 4096
> openssl req -new -key post-expired-example.key -out post-expired-example.csr
> openssl x509 -req -days 365 -in post-expired-example.csr -CA updated-ca-test.crt  -CAkey updated-ca-test.key  -CAserial -out post-expired-example.crt
> openssl x509 -in post-expired-example.crt -text
> openssl verify -verbose -CAfile updated-ca-test.crt post-expired-example.crt
post-expired-example.crt: OK

Now verify the old cert verifies using the new CA:

> openssl verify -verbose -CAfile updated-ca-test.crt pre-expired-example.crt
pre-expired-example.crt: OK

THIS WORKED. The updated CA could be used to verify both new and previously created certs. Hurray!!


An expired/expiring root CA may be a hassle, but it's not catastrophic. The biggest pain should be pushing out the updated root CA everywhere the cert is used in your environment. If you're using an orchestration/CM tool like Salt or Ansible, updating the root CA cert shouldn't be too bad, but remember to reload or restart any services using the cert to force the updated CA cert to be read.

A workaround for running deja-dup as root in Ubuntu 20.04

Posted on Mon 13 July 2020 in hints • Tagged with hint, deja-dup, ubuntu

I found a workaround for the 'Duplicity fails to start' issue, where 'sudo deja-dup' would fail with the python stacktrace mentioned in the launchpad ticket.

The ticket was not very useful, so I started looking at the various files in the stacktrace and saw that the failing line from /usr/lib/python3/dist-packages/duplicity/backends/ was within an "if u'DBUS_SESSION_BUS_ADDRESS' not in os.environ" block.

So I wondered what would happen if I let that environment variable pass into the sudo environment. I tried 'sudo -E deja-dup' to preserve the environment. This didn't result in a stacktrace, but it ended up running the backup as the normal non-root user, probably because the preserved environment included the USER and HOME variables along with the DBUS_SESSION_BUS_ADDRESS variable.

Then I tried preserving just DBUS_SESSION_BUS_ADDRESS with 'sudo --preserve-env=DBUS_SESSION_BUS_ADDRESS deja-dup', and it worked as expected.

So the hint here is that when presented with a stacktrace don't be afraid to "Use the Source, Jean Luc".

A image of Patrick Stewart playing Gurney Halleck from David Lynch's Dune film, with the meme text 'Use the Source, Jean Luc'.

How I orgmode

Posted on Mon 13 July 2020 in hints • Tagged with hint, orgmode, orgzly, owncloud

This post covers how I'm keeping my orgmode notebooks synced between my desktop and phone. There are also some tips on how I resolve merge conflicts in my notebooks and other hints for using Orgzly.


uml diagram

(The Pelican PlantUML plugin is pretty nice.)

Owncloud Server

Set up an owncloud server somewhere where both my desktop and phone can reach it.

Desktop: Owncloud client and git repo

On the desktop, install the owncloud client and configure ~/owncloud to be shared with the server. Also set up a ~/owncloud/org directory and add a ~/org symlink that points to ~/owncloud/org. This is mostly to make it easier to access the org files.

Update your emacs and/or vi configs to support orgmode. I'll update the org files in the editor that's more convenient at the time, though I'll use emacs if I need to update the deadline of a task.

I've also done a 'git init' in the ~/owncloud/org directory, adding a .git directory there. This repo will be used in the 'Handling Conflicts' section below.

Create or move your notebook files into the ~/owncloud/org directory and make sure they have a '.org' extension. Verify that the org files are being synced to the owncloud server.

Make sure your owncloud client is set to start when your desktop reboots.

Phone: Owncloud client and Orgzly

Install the owncloud client on your phone. Configure it to sync with the owncloud server and verify it's syncing the notebook files from the server.

Also install and configure Orgzly. For syncing the repositories, use WebDAV with the URL https://<your-owncloud-server>/remote.php/webdav/org. I originally tried using local storage, but I couldn't get that to work.

Then for each of the notebooks, set the link to https://<your-owncloud-server>/remote.php/webdav/org. Update one of your notebooks with Orgzly and verify the change is synced to your desktop.

Day-to-day Usage

Throughout the day, I'm usually using Orgzly, checking the Agenda tab for overdue and upcoming tasks, and adding new tasks and notes.

I access the notebooks on the desktop less frequently, mostly for archiving completed tasks and adding links and notes to research tasks. I also tend to move tasks between notebooks from the desktop.

Handling Recurring Events

I ignore the scheduled time setting in Orgzly and only set deadlines with warning times. Then I treat the notification generated when the warning period starts as the cue to start on a task.

For repeating events, I use the '++' modifier on tasks. This way, if I miss a few iterations of a task, the next iteration still lands in the future.
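As I understand the org manual, '++' shifts the deadline forward by whole periods until it lands in the future, so missed iterations collapse into one. A small sketch of that behavior (next_plus_plus is just an illustrative name, not an org or Orgzly API):

```python
from datetime import date, timedelta

def next_plus_plus(deadline, period_days, today):
    """Org's '++' repeater: keep adding whole periods until the
    deadline is in the future, so missed iterations collapse."""
    while deadline <= today:
        deadline += timedelta(days=period_days)
    return deadline

# A weekly task last due Nov 1 and completed on Nov 20 jumps straight
# to Nov 22, skipping the missed Nov 8 and Nov 15 slots.
print(next_plus_plus(date(2023, 11, 1), 7, date(2023, 11, 20)))  # 2023-11-22
```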

I also try to set an appropriate warning period when setting tasks.

It took me a while to figure out I could tap on the date itself to bring up a calendar instead of being limited to the 'today', 'tomorrow', and 'next week' options. <shrug>.

Handling Conflicts

Sometimes when I'm updating the notebooks on both my desktop and phone, Orgzly will say there's a conflict.

When this happens, I go to my desktop and make a checkpoint commit of any outstanding changes in the ~/owncloud/org repo. Then I push the Notebook from Orgzly (the cloud icon with the 'up' arrow).

Then on the desktop, I do a 'git diff' and adjust the Notebook as needed.

Usually this includes adding some new notes or adjusting some deadlines.

Importing Stock Transactions Into Gnucash With Python

Posted on Sun 21 June 2020 in hints • Tagged with hint, python, gnucash

I use GnuCash, but keeping my 401(k) accounts up to date has always been a tedious, manual process.

I came up with the python snippet below to handle the import, but it was a pain to write since I couldn't find any examples for setting up stock or mutual fund transactions.

The SetSharePriceAndAmount method ended up being key, and I only found that by searching through the gnucash source.

The GncNumeric class also ended up being more of a pain to use than I expected. There's probably a better way to use it, but the 'multiply values by 10**6 and use 10**6 as the denominator' approach is working for me now.
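To see what those fractions look like without the bindings, here's a sketch in plain Python (to_gnc_numeric_parts is a hypothetical helper, not part of python-gnucash) that converts a decimal string into the (numerator, denominator) pair that GncNumeric represents:

```python
from fractions import Fraction

def to_gnc_numeric_parts(value_str, denom=10 ** 6):
    """Turn a decimal string like '1.234567' into the (num, denom)
    integer pair that GncNumeric(num, denom) represents."""
    num = round(Fraction(value_str) * denom)
    return int(num), denom

print(to_gnc_numeric_parts("1.234567"))  # (1234567, 1000000)
print(to_gnc_numeric_parts("19.99"))     # (19990000, 1000000)
```

Going through Fraction avoids the float round-off that a bare float(value_str) * 10**6 can introduce for some values.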

I'm using the stock GnuCash and python-gnucash version 2.6.19 available in Ubuntu 18.04, so this is stuck on python 2.7.


import csv
from datetime import datetime

import gnucash

session = gnucash.Session("xml://yourfile.gnucash")
book = session.book
root_account = book.get_root_account()

usd = book.get_table().lookup('ISO4217','USD')

# There's probably a better way to use 'Your:Retirement:Contributions' instead ....
contrib_acct = root_account.lookup_by_name("Your").lookup_by_name("Retirement").lookup_by_name("Contributions")

parent_acct = root_account.lookup_by_name("401k")

with open('your_transactions.csv', 'rb') as trans_csv:
  trans_reader = csv.reader(trans_csv, delimiter=',')

  # Skip over the first row since it's headers
  header = next(trans_reader)

  for description, date, fund_name, share_price_str, share_amount_str, amount_str in trans_reader:
    child_account = parent_acct.lookup_by_name(fund_name)

    posting_date = datetime.strptime(date,"%m/%d/%y")

    tx = gnucash.Transaction(book)
    tx.BeginEdit()
    tx.SetCurrency(usd)
    tx.SetDatePostedTS(posting_date)
    tx.SetDescription(description)

    # First split: the fund account receiving the shares
    sp1 = gnucash.Split(book)
    sp1.SetParent(tx)
    sp1.SetAccount(child_account)

    # GncNumeric(n, d) represents numbers as fractions of the form n/d,
    # so GncNumeric(1234567, 1000000) = 1.234567
    # There's probably a better way to do this...
    share_price = gnucash.GncNumeric(int(float(share_price_str) * (10 ** 6)), 10 ** 6)
    share_amount = gnucash.GncNumeric(int(float(share_amount_str) * (10 ** 6)), 10 ** 6)

    # share_price * share_amount == amount, so I could have derived this
    # instead of using the value from the csv
    amount = gnucash.GncNumeric(int(float(amount_str) * (10 ** 6)), 10 ** 6)

    # ( ˘▽˘)っ♨  This is the secret sauce for setting the number of shares and the price.
    sp1.SetSharePriceAndAmount(share_price, share_amount)

    # Second split: the balancing entry against the contributions account
    sp2 = gnucash.Split(book)
    sp2.SetParent(tx)
    sp2.SetAccount(contrib_acct)
    sp2.SetValue(amount.neg())

    tx.CommitEdit()

session.save()
session.end()


Special thanks to this post for providing most of the code above.

Elixir on Centos 7

Posted on Sun 08 October 2017 in hints • Tagged with hints, elixir, centos

I'm at the point with an Elixir/Phoenix side project where I'm thinking about deployment.

The first big stumbling block was that my development environment (Ubuntu 16.04) wasn't the same as my deployment environment (Centos 7). For Ubuntu, I could pull the latest Elixir packages from Erlang Solutions, but they don't host Centos Elixir packages, and the version of Elixir in EPEL is over 4 years old - old enough that 'mix deps' on my project was erroring out.

I found some posts about installing Elixir on Centos 7, but they involve cloning the Elixir repo and building it from source, and I don't want development tools like git and gcc on my production machines.

I also found an approach that involves downloading a precompiled mystery-meat bundle off Github, without explaining how the bundle was built. The precompiled bundle is mentioned on the install page, but still, that's too mysterious for me.

Maybe one of those links will work for you, but what I wanted was a way to build a more recent version of the Elixir rpm than what was available in EPEL. That way I can compile and package Elixir on a dev machine, and then copy only the rpm up to my production machine.

It looks like Elixir is stuck in EPEL waiting for the Erlang 18 release to get promoted in EPEL, so maybe I can take the existing Elixir packaging source and build it against the latest Erlang packages from Erlang Solutions...

I found the packaging source, and after poking around a bit I came up with the Vagrantfile below. It seems to be working OK so far.

Lessons Learned

  • spectool is my new favorite utility.
  • Make sure your Elixir development environment is as close as possible to your deployment environment.

Next Steps

  • Convert the Vagrantfile into a Dockerfile or start using vagrant-lxc

Lessons from Writing a Pylint Plugin

Posted on Sun 24 September 2017 in lessons • Tagged with lessons, python, pylint, peer reviews

At work there's a python coding convention that I tend to overlook a lot. So when I post merge requests, there's a pretty good chance someone's going to call me out on this, which leads to a followup commit and another round of peer review. This can add an extra delay of a few hours while I notice the comments, switch context back to that merge request, make the changes, update the merge request, and wait for another round of reviews. If I could find a way to check my code for this convention before posting the merge request, I could get my code merged a few hours faster....

The Convention

The coding convention I cannot internalize is as follows: In python, the format method for strings will call the __format__ method on its arguments for you, so any code that looks like:

"interpolate these: {} {}".format(str(a), str(b))

Need only look like:

"interpolate me: {} {}".format(a, b)

The Pylint Plugin

So googling around led me to this Ned Batchelder post from a few years back. That post also led to a couple of pylint plugins. Looking at pylint's own format checker reminded me that I should also be handling keyword arguments.

From the post and sample code, it looked like I needed to define a checker class with a visit_callfunc method that would check when the 'format' method was used, then check all the arguments to the format call and report an error if any of them were a call to str().
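The real plugin is built on pylint/astroid, but the traversal logic can be sketched with the stdlib ast module (the node names differ slightly; astroid's visit_callfunc corresponds roughly to ast's visit_Call, and RedundantStrFinder is an illustrative name, not the actual plugin):

```python
import ast

class RedundantStrFinder(ast.NodeVisitor):
    """Record the line numbers of str(x) arguments passed to .format()."""

    def __init__(self):
        self.offences = []

    def visit_Call(self, node):
        # Is this a ".format(...)" method call?
        if isinstance(node.func, ast.Attribute) and node.func.attr == "format":
            # Check both positional and keyword arguments
            args = list(node.args) + [kw.value for kw in node.keywords]
            for arg in args:
                # Flag any argument of the form str(...)
                if (isinstance(arg, ast.Call)
                        and isinstance(arg.func, ast.Name)
                        and arg.func.id == "str"):
                    self.offences.append(arg.lineno)
        self.generic_visit(node)

finder = RedundantStrFinder()
finder.visit(ast.parse('"x: {} y: {name}".format(str(a), name=str(b))'))
print(finder.offences)  # [1, 1]
```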

Here's what I eventually ended up with.

To come up with this, I used an embarrassing amount of exploratory programming to figure out astroid. I wrote an initial visit_callfunc() method, based on the sample code, that didn't do much more than dump out all the data about the node argument via dir(node) and node.__dict__. Then I would run pylint with the plugin against some sample source containing the error I was trying to get the plugin to report.

I ran the plugin against the existing code and found one lingering case where the reviewers had allowed one of my unnecessary str() calls into the codebase. It's been removed now.

Lessons Learned

  • pylint plugins are pretty powerful and I wouldn't shy away from writing another one; I'm on the lookout for other excuses to do so.
  • is a useful 'missing manual' for the python AST.
  • format() can take both positional and keyword arguments. My original pass at the plugin only supported positional arguments.
  • The bandit project exists and looks useful. I stumbled across it while looking for other pylint plugins.

Using pm-utils to save/restore VMs on workstation suspend/restore

Posted on Sat 16 September 2017 in hints • Tagged with hints, ubuntu, vagrant, pm

I use Ubuntu (16.04 for now) and Vagrant (1.9.0 for now) on a bunch of my projects, and I've been running into something like this power management bug for a while now: after restoring from suspension, my vagrant sessions would be dead and I'd have to 'vagrant halt' and 'vagrant up' before another 'vagrant ssh' would succeed.

To work around this, I came up with some /etc/pm/sleep.d scripts which would save any running vagrant boxes when suspending the workstation and then resume the VMs when resuming the workstation.

Now if I'm in a 'vagrant ssh' session and Ubuntu suspends/resumes, instead of coming back to a frozen session, I'll see I've been disconnected from the ssh session, and I can do another 'vagrant ssh' without having to halt/re-up the VM. That's better than nothing, but the next step here is to start using something like screen or tmux in my vagrant sessions so I can restore right back to where I left off.

So why bother with two scripts when you could have one script with a single case statement? I wanted saving the running vagrant boxes to happen while the usual services and userspace infrastructure were still running, so I put that script in the 00-49 range, as per the 'Sleep Hook Ordering Convention' section of 'man 8 pm-action'. However, I don't want restoration to happen until all the services have restarted, so I pushed the resume handling to the end of the service hook range. I may want to revisit this and rename it to 75_vagrant.

Note that in the resume script, the command is pushed into the background since I didn't want to wait for the VMs to be restored before resuming Ubuntu. I'm usually checking email or the web for a bit before going back to my VMs, so I'm OK if they're not ready immediately.

Here are the scripts.

The first script is /etc/pm/sleep.d/01_vagrant:


#!/bin/sh

YOURNAME="your normal nonroot user name"

case "$1" in
    suspend|hibernate)
        timestamp=`date --rfc-3339=seconds`
        echo "${timestamp}: $0 output" >> /var/log/pm-suspend-vagrant.log
        (/sbin/runuser -u ${YOURNAME} /usr/bin/vagrant global-status | grep running | awk '{ print $1; }' | xargs -L1 -I % runuser -u ${YOURNAME} vagrant suspend % ) >> /var/log/pm-suspend-vagrant.log
        ;;
esac

# Don't let errors above stop suspension
exit 0
The second script is /etc/pm/sleep.d/


#!/bin/sh

YOURNAME="your normal nonroot user name"

case "$1" in
    resume|thaw)
        # Push the restoration into the background so it doesn't slow
        # down the rest of the resume process
        timestamp=`date --rfc-3339=seconds`
        echo "${timestamp}: $0 output" >> /var/log/pm-resume-vagrant.log
        ((/sbin/runuser -u ${YOURNAME} /usr/bin/vagrant global-status | grep saved | awk '{ print $1; }' | xargs -L1 -I % runuser -u ${YOURNAME} vagrant resume % ) >> /var/log/pm-resume-vagrant.log) &
        ;;
esac

# Don't let errors above stop restoration
exit 0
