do-rsync.sh
cache
content/pages/resume.pdf
+.mypy_cache/
--- /dev/null
+#########################################################
+A workaround for running deja-dup as root in Ubuntu 20.04
+#########################################################
+
+:date: 2020-07-13
+:tags: hint, deja-dup, ubuntu
+:category: hints
+:author: Jude N
+
+I found a workaround for the `Duplicity fails to start`_ issue where :code:`'sudo deja-dup'` would fail with the python stacktrace mentioned in the launchpad ticket.
+
+The ticket was not very useful, so I started looking at the various files in the stacktrace and saw that the failing line in /usr/lib/python3/dist-packages/duplicity/backends/giobackend.py was within an :code:`"if u'DBUS_SESSION_BUS_ADDRESS' not in os.environ"` block.
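
That environment check boils down to something like the sketch below (my simplification of the shape of the check, not the actual Duplicity source):

```python
def needs_session_bus(environ):
    # Shape of the guard in giobackend.py: the fragile code path only
    # runs when the session bus address is missing from the environment.
    return 'DBUS_SESSION_BUS_ADDRESS' not in environ

# Plain `sudo` strips the variable, so the failing path runs:
assert needs_session_bus({'USER': 'root', 'HOME': '/root'})

# `sudo --preserve-env=DBUS_SESSION_BUS_ADDRESS` keeps it, so the path is skipped:
assert not needs_session_bus({'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus'})
```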
+
+So I wondered what would happen if I let that environment variable pass into the sudo environment. I tried :code:`'sudo -E deja-dup'` as per `preserve the environment`_. This didn't result in a stacktrace, but it ended up running the backup as the normal non-root user, probably because the preserved environment included the USER and HOME variables along with the DBUS_SESSION_BUS_ADDRESS variable.
+
+Then I tried preserving just DBUS_SESSION_BUS_ADDRESS with :code:`'sudo --preserve-env=DBUS_SESSION_BUS_ADDRESS deja-dup'`, and it worked as expected.
+
+So the hint here is that when presented with a stacktrace, don't be afraid to "Use the Source, Jean Luc".
+
+.. image:: {static}/images/use-the-source-jean-luc.png
+ :align: center
+ :alt: An image of Patrick Stewart playing Gurney Halleck in David Lynch's Dune film, with the meme text 'Use the Source, Jean Luc'.
+
+
+.. _Duplicity fails to start: https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1855736
+.. _preserve the environment: https://www.petefreitag.com/item/877.cfm
--- /dev/null
+################
+Expired CA Notes
+################
+
+:date: 2022-10-31
+:tags: hint, openssl, x509, expiration
+:category: hints
+:author: Jude N
+
+Recently, I ran some tests to see what would happen when my root CA cert expired, and what I'd need to do to update the cert.
+
+Spoiler alert: Updating the CA cert was not that hard...
+
+First I created a CA that expired in 2 hours using the new-ca.py code below:
+
+.. code-block:: python
+
+    # This script creates a self-signed root CA cert.
+    from os.path import join
+
+    from OpenSSL import crypto
+
+    CN = 'expired-ca-test'
+    pubkey = "%s.crt" % CN    # %s is replaced with the CN, e.g. expired-ca-test.crt
+    privkey = "%s.key" % CN   # %s is replaced with the CN, e.g. expired-ca-test.key
+
+ pubkey = join(".", pubkey)
+ privkey = join(".", privkey)
+
+ k = crypto.PKey()
+ k.generate_key(crypto.TYPE_RSA, 2048)
+
+ # create a self-signed cert
+ cert = crypto.X509()
+ cert.get_subject().CN = CN
+ cert.set_serial_number(0)
+ cert.gmtime_adj_notBefore(0)
+ cert.gmtime_adj_notAfter(7200) # CA is only good for 2 hours
+ cert.set_issuer(cert.get_subject())
+ cert.set_subject(cert.get_subject())
+ cert.set_version(2)
+ xt = crypto.X509Extension(b'basicConstraints',1,b'CA:TRUE')
+ cert.add_extensions((xt,))
+
+ cert.set_pubkey(k)
+ cert.sign(k, 'sha512')
+ pub=crypto.dump_certificate(crypto.FILETYPE_PEM, cert)
+ priv=crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
+    with open(pubkey, "wt") as f:
+        f.write(pub.decode("utf-8"))
+    with open(privkey, "wt") as f:
+        f.write(priv.decode("utf-8"))
+
+This block is based on how the ancient certmaster program `created its CA`_.
+
+Then I created an expired-ca-test.srl file with the contents "01":
+
+.. code-block:: bash
+
+ > echo 01 > expired-ca-test.srl
+
+Then I issued a cert against this CA:
+
+.. code-block:: bash
+
+ > openssl genrsa -out pre-expired-example.key 4096
+ > openssl req -new -key pre-expired-example.key -out pre-expired-example.csr
+ > openssl x509 -req -days 365 -in pre-expired-example.csr -CA expired-ca-test.crt -CAkey expired-ca-test.key -CAserial expired-ca-test.srl -out pre-expired-example.crt
+ > openssl x509 -in pre-expired-example.crt -text
+ > openssl verify -verbose -CAfile expired-ca-test.crt pre-expired-example.crt
+ pre-expired-example.crt: OK
+
+Then I `waited 2 hours`_ and went back to check the certs:
+
+.. code-block:: bash
+
+ > openssl x509 -in expired-ca-test.crt -noout -enddate
+ notAfter=Oct 2 17:41:11 2022 GMT
+
+Then I tested what would happen if I tried verifying the cert signed with the expired CA:
+
+.. code-block:: bash
+
+ > openssl verify -verbose -CAfile expired-ca-test.crt pre-expired-example.crt
+ CN = expired-ca-test
+ error 10 at 1 depth lookup: certificate has expired
+ error pre-expired-example.crt: verification failed
+
+
+THIS FAILED. I had thought previously signed certs would continue to verify against the expired CA, and only creating new certs would fail. Instead, previously signed certs won't validate against the expired CA.
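
This behavior falls out of X.509 path validation: every certificate in the chain, the root CA included, must be inside its validity window at verification time. A toy model of the rule (dates made up to mirror the 2-hour CA):

```python
from datetime import datetime, timedelta

def time_valid(not_before, not_after, at):
    # Path validation checks the validity window of every cert in the
    # chain at verification time, not at signing time.
    return not_before <= at <= not_after

now = datetime(2022, 10, 2, 19, 41)  # a bit after the CA's notAfter
ca_window = (now - timedelta(hours=4), now - timedelta(hours=2))      # expired 2-hour CA
leaf_window = (now - timedelta(hours=3), now + timedelta(days=365))   # 365-day leaf

assert time_valid(*leaf_window, at=now)      # the leaf is still in-window on its own...
assert not time_valid(*ca_window, at=now)    # ...but the chain fails on the expired CA
```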
+
+Then I tried signing a new cert with the expired CA. Surely this would fail, right ?
+
+.. code-block:: bash
+
+ > openssl genrsa -out expired-example.key 4096
+ > openssl req -new -key expired-example.key -out expired-example.csr
+ > openssl x509 -req -days 365 -in expired-example.csr -CA expired-ca-test.crt -CAkey expired-ca-test.key -CAserial expired-ca-test.srl -out expired-example.crt
+
+THIS WORKED in that it created the cert, though verification still fails:
+
+.. code-block:: bash
+
+ > openssl verify -verbose -CAfile expired-ca-test.crt expired-example.crt
+ CN = expired-ca-test
+ error 10 at 1 depth lookup: certificate has expired
+ error expired-example.crt: verification failed
+
+
+Now let's see what happens if we update the CA cert with the update-ca.py script below.
+
+This is almost the same as the new-ca.py script above, except **the original CA key is reused instead of generating a new key**. Also **the CN and serial number need to be the same as the original expired CA cert**.
+
+Verification will fail if the CN or serial number values are not the same as the original CA's, but unfortunately I didn't save the errors from when I tried using 'updated-ca-test' as the CN, or from when I tried bumping the serial number up to 1.
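
If you want to double-check those two fields before regenerating, openssl can dump them directly. A sketch (generating a throwaway stand-in cert here so the commands are self-contained; in practice you'd point the second command at expired-ca-test.crt):

```shell
# Generate a stand-in CA cert (substitute expired-ca-test.crt in practice)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=expired-ca-test" \
    -set_serial 0 -days 1 -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt

# Dump the two fields the updated CA cert must reproduce exactly
openssl x509 -in /tmp/demo-ca.crt -noout -subject -serial
```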
+
+.. code-block:: python
+
+    # This script updates the expired CA cert, reusing the expired CA's key.
+    from os.path import join
+
+    from OpenSSL import crypto
+
+    CN = 'updated-ca-test'
+    pubkey = "%s.crt" % CN    # %s is replaced with the CN, e.g. updated-ca-test.crt
+    privkey = "%s.key" % CN   # %s is replaced with the CN, e.g. updated-ca-test.key
+
+ pubkey = join(".", pubkey)
+ privkey = join(".", privkey)
+
+ # Instead of creating a new key, use the old CA's key
+ # nope: k = crypto.PKey()
+ # nope: k.generate_key(crypto.TYPE_RSA, 2048)
+ st_key=open('expired-ca-test.key', 'rt').read()
+ k = crypto.load_privatekey(crypto.FILETYPE_PEM, st_key)
+
+ # create a self-signed cert
+ cert = crypto.X509()
+ cert.get_subject().CN = 'expired-ca-test' # keep the same CN as the old CA cert
+ cert.set_serial_number(0) # keep the same serial number as the old CA cert
+ cert.gmtime_adj_notBefore(0)
+ cert.gmtime_adj_notAfter(63072000) # CA is only good for 2 years
+ cert.set_issuer(cert.get_subject())
+ cert.set_subject(cert.get_subject())
+ cert.set_version(2)
+ xt = crypto.X509Extension(b'basicConstraints',1,b'CA:TRUE')
+ cert.add_extensions((xt,))
+
+ cert.set_pubkey(k)
+ cert.sign(k, 'sha512')
+ pub=crypto.dump_certificate(crypto.FILETYPE_PEM, cert)
+ priv=crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
+    with open(pubkey, "wt") as f:
+        f.write(pub.decode("utf-8"))
+    with open(privkey, "wt") as f:
+        f.write(priv.decode("utf-8"))
+
+Note that this code creates an updated-ca-test.key that's identical to expired-ca-test.key, so I could have continued using expired-ca-test.key in the cert creation below.
+
+.. code-block:: bash
+
+ > diff expired-ca-test.key updated-ca-test.key
+ > echo $?
+ 0
+
+
+Next I created an updated-ca-test.srl file. I could have continued using expired-ca-test.srl:
+
+.. code-block:: bash
+
+ > cp expired-ca-test.srl updated-ca-test.srl
+
+Now let's see if the new CA can be used to create a new cert:
+
+.. code-block:: bash
+
+ > openssl genrsa -out post-expired-example.key 4096
+ > openssl req -new -key post-expired-example.key -out post-expired-example.csr
+ > openssl x509 -req -days 365 -in post-expired-example.csr -CA updated-ca-test.crt -CAkey updated-ca-test.key -CAserial updated-ca-test.srl -out post-expired-example.crt
+ > openssl x509 -in post-expired-example.crt -text
+ > openssl verify -verbose -CAfile updated-ca-test.crt post-expired-example.crt
+ post-expired-example.crt: OK
+
+Now check that the old cert verifies using the new CA:
+
+.. code-block:: bash
+
+ > openssl verify -verbose -CAfile updated-ca-test.crt pre-expired-example.crt
+ pre-expired-example.crt: OK
+
+THIS WORKED. The updated CA could be used to verify both new and previously created certs. Hurray !!
+
+Conclusion
+==========
+
+An expired/expiring root CA may be a hassle, but it's not catastrophic. The biggest pain should be pushing out the updated root CA everywhere the cert is being used in your environment. If you're using an orchestration/CM tool like Salt or Ansible, updating the root CA cert shouldn't be too bad, but remember to reload or restart any services using the cert to force the updated CA cert to be reread.
+
+Sources
+=======
+- https://serverfault.com/questions/306345/certification-authority-root-certificate-expiry-and-renewal
+- https://gist.github.com/mohanpedala/468cf9cef473a8d7610320cff730cdd1
+
+
+.. _`created its CA`: https://github.com/jude/certmaster/blob/master/certmaster/certs.py#L92
+.. _`waited 2 hours`: https://store.steampowered.com/app/1366540/Dyson_Sphere_Program/
+
+
--- /dev/null
+#############
+How I orgmode
+#############
+
+:date: 2020-07-13
+:tags: hint, orgmode, orgzly, owncloud
+:category: hints
+:author: Jude N
+
+This post covers how I'm keeping my `orgmode`_ notebooks synced between my desktop and phone. There are also some tips on how I resolve merge conflicts in my notebooks and other hints for using Orgzly.
+
+Setup
+=====
+
+.. uml::
+
+ node workstation {
+ folder "orgs" as workstation_orgs
+ }
+
+
+ node laptop {
+ folder "orgs" as laptop_orgs
+ }
+
+ cloud ownCloud {
+ folder "orgs" as owncloud_orgs
+ }
+
+ node phone {
+ node Orgzly {
+ folder notebooks
+ }
+ }
+
+ workstation_orgs - owncloud_orgs
+ owncloud_orgs - notebooks
+ owncloud_orgs -- laptop_orgs
+
+(The `Pelican PlantUML plugin`_ is pretty nice.)
+
+Owncloud Server
+---------------
+
+Set up an `owncloud`_ server somewhere where both my desktop and phone can reach it.
+
+Desktop: Owncloud client and git repo
+-------------------------------------
+
+On the desktop, install the owncloud client and configure ~/owncloud to be shared with the server. Also set up a ~/owncloud/org directory and add a ~/org symlink that points to ~/owncloud/org. This is mostly to make it easier to access the org files.
+
+Update your emacs and/or vi configs to support orgmode. I'll update the org files in the editor that's more convenient at the time, though I'll use emacs if I need to update the deadline of a task.
+
+I've also done a 'git init' in the ~/owncloud/org directory, which adds a .git directory to the owncloud share. This repo will be used in the 'Handling Conflicts' section below.
+
+Create or move your notebook files into the ~/owncloud/org directory and make sure they have a '.org' extension. Verify that the org files are being synced to the owncloud server.
+
+Make sure your owncloud client is set to start when your desktop reboots.
+
+Phone: Owncloud client and Orgzly
+---------------------------------
+
+Install the `owncloud client`_ on your phone. Configure it to sync with the owncloud server and verify it's syncing the notebook files from the server.
+
+Also install and configure `Orgzly`_. For syncing the repositories, `use WebDAV`_ with the URL https://<your-owncloud-server>/remote.php/webdav/org. I originally tried using local storage, but I couldn't get that to work.
+
+Then for each of the notebooks, `set the link`_ to https://<your-owncloud-server>/remote.php/webdav/org. Update one of your notebooks with Orgzly and verify the change is synced to your desktop.
+
+
+Day-to-day Usage
+================
+
+Throughout the day, I'm usually using Orgzly, checking the Agenda tab for overdue and upcoming tasks, and adding new tasks and notes.
+
+I access the notebooks on the desktop less frequently, mostly for archiving completed tasks and adding links and notes to research tasks. I also tend to move tasks between notebooks from the desktop.
+
+
+Handling Recurring Events
+-------------------------
+
+I ignore the scheduled time setting in Orgzly and only set deadlines with warning times. Then I treat the notification generated when the warning time passes as the signal to start on a task.
+
+For repeating events, I use the '++' modifier for tasks. This way if I miss a few iterations of a task it will add the next iteration in the future.
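
For example, a repeating deadline using the '++' modifier and a warning period might look like this (a made-up task, not from my notebooks):

```text
** TODO Take out the recycling
   DEADLINE: <2020-07-16 Thu ++1w -2d>
```

The '-2d' inside the timestamp is the warning period, and '++1w' shifts a missed task forward to the next future Thursday instead of scheduling the repeat in the past.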
+
+I also try to set an appropriate warning period when setting tasks.
+
+It took me a while to figure out I could tap on the date itself to bring up a calendar instead of being limited to the 'today', 'tomorrow', and 'next week' options. <shrug>
+
+Handling Conflicts
+------------------
+Sometimes when I'm updating the notebooks on both my desktop and phone, Orgzly will say there's a conflict.
+
+When this happens I go to my desktop and make a checkpoint commit of any outstanding changes in the ~/owncloud/org repo. Then I push the notebook from Orgzly (the cloud icon with the 'up' arrow).
+
+Then on the desktop, I do a 'git diff' and adjust the notebook as needed.
+
+Usually this involves adding some new notes or adjusting some deadlines.
+
+.. _orgmode: https://orgmode.org/
+.. _orgzly: http://www.orgzly.com/
+.. _owncloud: https://owncloud.org/
+.. _owncloud client: https://owncloud.com/apps/
+.. _set the link: http://www.orgzly.com/help#4268ee7
+.. _use WebDAV: http://www.orgzly.com/help#sync-repo-webdav
+.. _Pelican PlantUML plugin: https://github.com/getpelican/pelican-plugins/tree/master/plantuml
--- /dev/null
+#################################################
+An Exploration into Jinja2 Unit Tests With Pytest
+#################################################
+
+:date: 2023-11-20
+:tags: lessons,jinja2,pytest,salt
+:category: lessons
+:author: Jude N
+
+This post covers an approach I've used for adding pytest-style unit tests to my Salt-related projects.
+
+The content below may be specific to Salt, but the testing techniques should work on any project using Jinja2.
+
+Project Directory Structure
+===========================
+
+In my existing salt project repo, I added a tests subdirectory and a setup.cfg with the contents below, and added a test target to the project's Makefile.
+
+I also installed the pytest and jinja2 eggs in a virtualenv in my working directory.
+
+::
+
+ ├─ test_project repo
+ ├─── .git
+ ├─── init.sls
+ ├─── map.jinja
+ ├─── templates
+ ├─────── some_template.cfg
+ ├─── tests
+ ├─────── conftest.py
+ ├─────── test_init.py
+ ├─────── test_map.py
+ ├─────── test_some_template.py
+ ├─── setup.cfg
+ ├─── Makefile
+ ├─── env
+ ├─────── ... pytest
+ ├─────── ... jinja2
+
+Here's a snippet of the Makefile for kicking off the tests:
+
+Note that since the tests are python code, you should run them through whatever linters and style-checkers you're using on the rest of your python code.
+
+::
+
+ test:
+ py.test-3
+        pycodestyle ./tests/*.py
+
+
+And here's the setup.cfg:
+
+Without the extra '--tb=native' argument, pytest would sometimes throw an internal error when Jinja ended up throwing an exception, as we'll see later below.
+
+::
+
+ [tool:pytest]
+ python_files = test_*.py tests/__init__.py tests/*/__init__.py
+ #uncomment the line below for full unittest diffs
+ addopts =
+    # Any --tb option except native (including no --tb option) throws an internal pytest exception
+    # when jinja exceptions are thrown
+ --tb=native
+ # Uncomment the next line for verbose output
+ # -vv
+
+ [pycodestyle]
+ max-line-length=999
+ ignore=E121,E123,E126,E226,E24,E704,E221,E127,E128,W503,E731,E131,E402
+
+
+Note there is a test_*.py file for each file that includes Jinja2 markup.
+
+tests/conftest.py
+=================
+
+The conftest.py contains the common fixtures used by the tests. I've tried adding docstring
+comments to explain how to use the fixtures, but also see the examples.
+
+::
+
+ import pytest
+ from unittest.mock import Mock
+
+ import jinja2
+ from jinja2 import Environment, FileSystemLoader, ChoiceLoader, DictLoader, StrictUndefined
+
+
+ class RaiseException(Exception):
+ """ Exception raised when using raise() in the mocked Jinja2 context"""
+ pass
+
+ @pytest.fixture()
+ def mocked_templates(request):
+ """ A dict of template names to template content.
+        Use this to mock out Jinja 'import "template" as tmpl' lines.
+ """
+ mocked_templates = {}
+ return mocked_templates
+
+
+ @pytest.fixture(autouse=True)
+ def jinja_env(request, mocked_templates):
+ """ Provide a Jinja2 environment for loading templates.
+        The ChoiceLoader will first check the DictLoader for mocked 'import'-style templates,
+        then the FileSystemLoader will check the local file system for templates.
+
+        The DictLoader is first so that Jinja won't end up using the FileSystemLoader for
+        templates pulled in with an import statement that doesn't include the 'with context'
+        modifier.
+
+ Setting undefined to StrictUndefined throws exceptions when the templates use undefined variables.
+ """
+
+ test_loader=ChoiceLoader([
+ DictLoader(mocked_templates),
+ FileSystemLoader('.'),
+ ])
+
+ env = Environment(loader=test_loader,
+ undefined=StrictUndefined,
+ extensions=['jinja2.ext.do', 'jinja2.ext.with_', 'jinja2.ext.loopcontrols'])
+
+ return env
+
+
+ @pytest.fixture(scope='session', autouse=True)
+ def salt_context():
+ """ Provide a set of common mocked keys.
+ Currently this is only the 'raise' key for mocking out the raise() calls in the templates,
+ and an empty 'salt' dict for adding salt-specific mocks.
+ """
+
+ def mocked_raise(err):
+ raise RaiseException(err)
+
+ context = {
+ 'raise': mocked_raise,
+ 'salt': {}
+ }
+
+ return context
+
+init.sls
+========
+For purposes of the sections below, here's what the init.sls looks like:
+
+::
+
+ #!jinja|yaml
+ # {% set version = salt['pillar.get']('version', 'latest') %}
+ # version: {{ version }}
+
+ # {% if version == 'nope' %}
+ # {{ raise("OH NO YOU DIDN'T") }}
+ # {% endif %}
+
+
+
+Mock out the Jinja Context
+==========================
+
+Let's test out that rendering init.sls should return a version key with some value.
+
+Being able to mock out the salt pillar.get() function was a big breakthrough with respect to being able to write any sort of unit tests for the Salt states.
+
+::
+
+ @pytest.fixture
+ def poc_context(self, salt_context):
+ """ Provide a proof-of-concept context for mocking out salt[function](args) calls """
+ poc_context = salt_context.copy()
+
+ def mocked_pillar_get(key,default):
+ """ Mocked salt['pillar.get'] function """
+ pillar_data = {
+ 'version' : '1234'
+ }
+ return pillar_data.get(key, default)
+
+ # This is the super sauce:
+ # We can mock out the ``salt['function'](args)`` calls in the salt states by
+        # defining a 'salt' dict in the context whose keys are the function names and whose values are the mocked functions
+ poc_context['salt']['pillar.get'] = mocked_pillar_get
+
+ return poc_context
+
+
+ def test_jinja_template_poc(self, jinja_env, poc_context):
+ """ Render a template and check it has the expected content """
+
+ # This assumes the tests are run from the root of the project.
+ # The conftest.py file is setting the jinja_env to look for files under the 'latest' directory
+ template = jinja_env.get_template('init.sls')
+
+ # Return a string of the rendered template.
+ result = template.render(poc_context)
+
+ # Now we can run assertions on the returned rendered template.
+ assert "version: 1234" in result
+
+
+Mocking a raise() error
+=======================
+
+Now, let's see how we can test triggering the raise() error based on the pillar data:
+
+::
+
+ @pytest.fixture
+ def bad_context(self, salt_context):
+ """ Lets see what happens if the template triggers a raise() """
+
+ # The base salt_context from conftest.py includes a 'raise' entry that raises a RaiseException
+ bad_context = salt_context.copy()
+ bad_context['salt']['pillar.get'] = lambda k, d: 'nope'
+ return bad_context
+
+ def test_raise_poc(self, jinja_env, bad_context):
+ """ Try rendering a template that should fail with some raise() exception """
+
+ with pytest.raises(RaiseException) as exc_info:
+ template = jinja_env.get_template('init.sls')
+ result = template.render(bad_context)
+
+ raised_exception = exc_info.value
+ assert str(raised_exception) == "OH NO YOU DIDN'T"
+
+
+Mocking imported templates
+==========================
+
+Sometimes the Jinja templates may try to import other templates which are either out of scope with respect to the current project, or where the import doesn't include the 'with context' modifier, so the Jinja context isn't available when rendering the template.
+
+In this case we can use the DictLoader portion of the jinja_env to mock out the imported template.
+
+In this example, let's assume the following template file exists as templates/missing-import.tmpl:
+
+::
+
+ {%- import 'missing.tmpl' as missing -%}
+ Can we mock out missing/out of scope imports ?
+
+ Mocked: {{ missing.perhaps }}
+    Macro Call: {{ missing.lost('forever') }}
+
+Now here is a test that can mock out the missing.tmpl contents, including the lost() macro call:
+
+::
+
+ def test_missing_template(self, jinja_env, mocked_templates, salt_context):
+ """
+ In this example, templates/missing-import.tmpl tries to import a non-available 'missing.tmpl' template.
+ The ChoiceLoader checks DictLoader loader, which checks mocked_templates and finds a match
+ """
+
+ mocked_templates['missing.tmpl'] = """
+ {% set perhaps="YES" %}
+ {% macro lost(input) %}MOCKED_LOST{% endmacro %}
+ """
+ missing_template = jinja_env.get_template('templates/missing-import.tmpl')
+ missing_result = missing_template.render(salt_context)
+ assert "Mocked: YES" in missing_result
+ assert "Macro Call: MOCKED_LOST" in missing_result
+
+
+Mocking a macro call
+====================
+
+Let's say I have a Jinja2 macro defined below:
+
+::
+
+ #!jinja|yaml
+
+ # {% macro test_macro(input) %}
+ # {% if input == 'nope' %}
+ # {{ raise("UNACCEPTABLE") }}
+ # {% endif %}
+ # {% set version = salt['pillar.get']('version', 'latest') %}
+ "macro sez {{ input }}":
+ test.show_notification:
+ - text: "{{ input }}"
+
+ "version sez {{ version }}":
+ test.show_notifications:
+ - text: "{{ version }}"
+
+ # {% endmacro %}
+
+Testing this macro is a little more involved, since we first have to append a call to the macro to the template source before rendering it. Note we're reusing the poc_context fixture defined earlier, so the pillar.get() call is still mocked out to return 1234 for the version:
+
+::
+
+ def test_get_pillar_from_macro(self, jinja_env, poc_context):
+ """
+ If we want to reference the mocked context in the macros, we need
+ to render the source + macro call within a context.
+ """
+
+        # The '[0]' is because get_source() returns a (source, filename, uptodate) tuple.
+ template_source = jinja_env.loader.get_source(jinja_env, 'macro.sls')[0]
+ new_template = jinja_env.from_string(template_source + "{{ test_macro('hello') }}")
+ result = new_template.render(poc_context)
+
+ assert "macro sez hello" in result
+ assert "version sez 1234" in result
+
+It's also possible to check that the macro raises an error based on the input:
+
+::
+
+ def test_raise_from_macro(self, jinja_env, salt_context):
+ """
+ In this test, try forcing a raise() from within a macro
+ """
+
+ with pytest.raises(RaiseException) as exc_info:
+ template_source = jinja_env.loader.get_source(jinja_env, 'macro.sls')[0]
+ new_template = jinja_env.from_string(template_source + "{{ test_macro('nope') }}")
+ result = new_template.render(salt_context)
+
+ raised_exception = exc_info.value
+ assert str(raised_exception) == "UNACCEPTABLE"
+
+FECUNDITY: Checking for undefined variables during template rendering
+=====================================================================
+Back in the day I learned that one of the virtues of a scientific theory was 'fecundity', or the ability for the theory to predict new behavior the original theory hadn't considered.
+
+It looks like this may be called `fruitfulness`_ now, but still whenever I stumble across something like this, I shout out 'FECUNDITY' internally to myself. :shrug:
+
+While I was working on this project, I noticed the Jinja Environment constructor has an `undefined`_ argument that defaults to Undefined. I also noticed `StrictUndefined`_ was another value the undefined argument could take.
+
+It would be useful if the tests could throw exceptions when they ran into undefined variables. These could come from typos in the templates, or from not mocking out all the global variables used in a template.
+
+So I tried making a Jinja Environment with undefined=StrictUndefined, and wrote a test against a template with a typo in a variable name to see if the test would raise an exception, and it did !
+
+This is much more useful than the default Jinja behavior, where the typo would silently surface in the output as an empty string.
+
+It's also more useful than setting undefined to `DebugUndefined`_, which sometimes raised an exception, but sometimes left the unmodified '{{ whatever }}' strings in the rendered templates. Bleh.
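
The difference is easy to demonstrate with plain jinja2, outside of any Salt machinery (a minimal sketch):

```python
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError

src = "version: {{ verion }}"  # note the 'verion' typo

# Default behavior: the typo silently renders as an empty string.
assert Environment().from_string(src).render(version="1234") == "version: "

# StrictUndefined: the same typo raises at render time.
raised = False
try:
    Environment(undefined=StrictUndefined).from_string(src).render(version="1234")
except UndefinedError as err:
    raised = "'verion' is undefined" in str(err)
assert raised
```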
+
+Here's the sample template I used, called unexpected_variable.sls. It's the same as the original init.sls, but with a 'verion' typo:
+
+::
+
+ #!jinja|yaml
+
+ # {% set version = salt['pillar.get']('role:echo-polycom:version', 'latest') %}
+ # version: {{ version }}
+
+ # {% if verion == 'nope' %}
+ # {{ raise("OH NO YOU DIDN'T") }}
+ # {% endif %}
+
+And let's try adding this test, which is the same as the earlier test_jinja_template_poc() test, but with the buggy template:
+
+::
+
+ def test_unexpected_variable(self, jinja_env, poc_context):
+ """ Render a template and check it has the expected content """
+
+ # This assumes the tests are run from the root of the project.
+ # The conftest.py file is setting the jinja_env to look for files under the 'latest' directory
+ template = jinja_env.get_template('unexpected_variable.sls')
+
+ # Return a string of the rendered template.
+ result = template.render(poc_context)
+
+ # Now we can run assertions on the returned rendered template.
+ assert "version: 1234" in result
+
+This test will fail with the UndefinedError exception below !
+Cool. I can fix the typo and rerun the test to get it passing again ! FECUNDITY !
+
+::
+
+ ==================================================== FAILURES =======================================================
+ _________________________________________ TestJinja.test_unexpected_variable __________________________________________
+ Traceback (most recent call last):
+ File "/my/working/dir/test_jinja_template_poc.py", line 150, in test_unexpected_variable
+ result = template.render(poc_context)
+ File "/usr/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
+ return original_render(self, *args, **kwargs)
+ File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
+ return self.environment.handle_exception(exc_info, True)
+ File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
+ reraise(exc_type, exc_value, tb)
+ File "/usr/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
+ raise value.with_traceback(tb)
+ File "unexpected_variable.sls", line 6, in top-level template code
+ # {% if verion == 'nope' %}
+ jinja2.exceptions.UndefinedError: 'verion' is undefined
+ ========================================= 1 failed, 5 passed in 0.89 seconds ==========================================
+
+
+Running the tests
+=================
+
+The tests are kicked off via 'pytest' like any other python project using pytest.
+
+.. code-block:: shell-session
+
+ workstation:~/projects/test_project.git# source ./env/bin/activate
+ (env) workstation:~/projects/test_project.git# pytest
+ ===================================================================== test session starts =====================================================================
+ platform linux -- Python 3.6.8, pytest-2.9.2, py-1.4.32, pluggy-0.3.1
+ rootdir: /vagrant, inifile:
+ plugins: catchlog-1.2.2
+ collected 5 items
+
+ latest/tests/test_jinja_template_poc.py .....
+
+Credit
+======
+
+I based this work on some ideas from the blog post `A method of unit testing Jinja2 templates`_ by `alexharv074`_.
+
+
+.. _`A method of unit testing Jinja2 templates` : https://alexharv074.github.io/2020/01/18/a-method-of-unit-testing-jinja2-templates.html
+.. _`alexharv074` : https://alexharv074.github.io/
+.. _`fruitfulness` : https://link.springer.com/article/10.1007/s11229-017-1355-6
+.. _`undefined`: https://jinja.palletsprojects.com/en/3.1.x/api/#undefined-types
+.. _`StrictUndefined`: https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.StrictUndefined
+.. _`DebugUndefined`: https://jinja.palletsprojects.com/en/3.1.x/api/#jinja2.DebugUndefined
--- /dev/null
+#####################################
+Podman/Testinfra/Salt Lessons Learned
+#####################################
+
+:date: 2022-11-13
+:tags: lessons,podman,testinfra,salt
+:category: lessons
+:author: Jude N
+
+Oh my, less than two years between posts. I'm on a roll !
+
+I've been looking into using `Podman`_ and `Testinfra`_ to test `Salt`_ states.
+
+I'd like to add some unit tests and a Containerfile to an existing repo of salt states, where running 'pytest' in the repo's workspace would spin up the container and run the tests against it, and then tear down the container.
+
+The tests would run 'salt state.apply' commands against the container, applying different sets of pillar data depending on the test.
+
+Project Directory Structure
+===========================
+
+First let's set up a directory structure for the project that includes the states, their tests, and any needed test data. In the case of salt states, the test data will be pillar files and files served by ext_pillar. The directory structure below is what I ended up using:
+
+::
+
+ ├─ test_project repo
+ ├─── .git
+ ├─── env
+ ├────── ... testinfra egg
+ ├─── Containerfile
+ ├─── setup.cfg
+ ├─── tests
+ ├───── test_*.py
+ ├───── data
+ ├──────── ext_pillar
+ ├──────── pillar
+ ├────────── top.sls
+ ├────────── test_zero.sls
+ ├────────── test_one.sls
+ ├────────── ...
+ ├──────── top.sls
+ ├─── test_project
+ ├───── *.sls
+ ├───── *.jinja
+ ├───── templates
+ ├──────── *.jinja
+ ├───── files
+ ├───── ...
+
+
+Assuming all these files are stored in git, there's a .git directory from when you cloned the repo.
+
+The 'env' directory is a python virtualenv where the testinfra egg has been installed. You can skip the virtualenv if you're pulling in testinfra from a global package.
+
+Containerfile is, well, a Podman Containerfile, and setup.cfg contains some pytest-specific settings.
+
+The tests directory is where the testinfra test\_\*.py files are stored.
+
+The tests/data/pillar directory will end up being mapped to the /srv/pillar directory in the test container. Similarly, tests/data/ext_pillar will be mapped to /srv/ext_pillar.
+
+The test_project directory includes the \*.sls and \*.jinja files, and any other salt-related subdirectories like 'templates', 'files', 'macros', etc. This directory will be mapped to /srv/salt/test_project in the container.
+
+Containerfile
+-------------
+
+The Containerfile I'm using for this project is below.
+
+.. code-block:: docker
+
+    # I'm using Ubuntu 20.04 for this project-under-test, so pull in the stock Ubuntu image for that version
+ FROM ubuntu:focal
+ RUN apt-get update
+
+ # The stock image doesn't include curl, so install it and bootstrap salt
+ # Longterm, I would host the bootstrapping script internally in case that site disappeared.
+ RUN apt-get install -y curl
+ RUN curl -L https://bootstrap.saltproject.io | sh -s --
+
+ # Configure salt to run as a masterless minion
+ RUN echo "file_client: local" > /etc/salt/minion.d/masterless.conf
+ RUN printf "local" > /etc/salt/minion_id
+
+ # Set up the /srv/salt environment
+ RUN mkdir -p /srv/salt
+ RUN mkdir -p /srv/ext_pillar/hosts/local/files
+ RUN printf "ext_pillar:\n - file_tree:\n root_dir: /srv/ext_pillar\n" >> /etc/salt/minion.d/masterless.conf
+
+ # Delay setting up /srv/salt/top.sls until the container starts, so PROJECT can be passed in as an ENV
+ RUN printf "printf \"base:\\n '*':\\n - \${PROJECT}\\n\" > /srv/salt/top.sls" >> /root/.bashrc
+
+ # Create a local user
+ RUN useradd local_user
+
+ # The Salt git states apparently assume git is already installed on the host, so install it.
+ RUN apt-get install -y git
+
+Building and verifying the saltmasterless:latest image
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Using this Containerfile, I built a saltmasterless:latest image:
+
+.. code-block:: shell-session
+
+ workstation:~/projects/test_project.git# podman build -t saltmasterless:latest .
+
+Then with this image, I can start a container that includes volumes mapping tests/data/pillar to /srv/pillar, tests/data/ext_pillar to /srv/ext_pillar/hosts/local/files, and test_project to /srv/salt/test_project:
+
+.. code-block:: shell-session
+
+ workstation:~/projects/test_project.git# podman run -it --env "PROJECT=test_project" -v ${PWD}/test_project:/srv/salt/test_project -v ${PWD}/tests/data/pillar:/srv/pillar -v ${PWD}/tests/data/ext_pillar:/srv/ext_pillar/hosts/local/files --name test_box --hostname local saltmasterless:latest
+ root@local:/#
+ root@local:/# find /srv
+ root@local:/# exit
+ workstation:~/projects/test_project.git# podman rm -f test_box
+
+setup.cfg
+---------
+The setup.cfg file is mostly used to tell pytest to ignore the salt states directory:
+
+.. code-block:: ini
+
+ [tool:pytest]
+ norecursedirs = test_project/files/*
+ addopts = -s
+ log_cli=true
+ log_level=NOTSET
+
+tests/data/pillar/top.sls
+-------------------------
+As mentioned above, the tests/data/pillar directory will be mapped to /srv/pillar in the container, but let's look at the top.sls a little closer. From the Containerfile, /etc/salt/minion_id was set to 'local', so normally the top.sls file will end up using /srv/pillar/test_zero.sls for its pillar data.
+
+But let's say we want to run a test with some other pillar data. In that case, in the test we'll use the salt-call '--id' argument to run the command as a different minion id. So with the top.sls file below, running 'salt-call --local --id=test_one state.apply' will use the test_one.sls pillar data instead of test_zero.sls.
+
+.. code-block:: yaml+jinja
+
+ {{ saltenv }}:
+
+ '*':
+ - match: glob
+ - ignore_missing: True
+
+ 'local':
+ - test_zero
+
+ 'test_one':
+ - test_one
+
+ 'missing_mandatory_pillar_item':
+ - missing_mandatory_pillar_item
+
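+Before writing a test around this, a quick way to sanity-check which pillar data a given minion id will pick up is to run pillar.items inside the container. This is just an illustrative session using the test_box container started above:
+
+.. code-block:: shell-session
+
+ root@local:/# salt-call --local --id=test_one pillar.items
+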
+
+tests/test_project.py host fixture
+----------------------------------
+
+The tests/test_project.py file includes a host fixture based on https://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images. Note that the podman_cmd is pretty much the same as the command used above when testing the container. The cwd-related logic is there because the -v arguments require full path names.
+
+.. code-block:: python
+
+ # scope='session' uses the same container for all the tests;
+ # scope='function' uses a new container per test function.
+ @pytest.fixture(scope='session')
+ def host(request):
+
+ cwd = os.getcwd()
+
+ podman_cmd = "podman run -d -it --env PROJECT=test_project -v ${PWD}/test_project:/srv/salt/test_project -v ${PWD}/tests/data/pillar:/srv/pillar -v ${PWD}/tests/data/ext_pillar:/srv/ext_pillar/hosts/local/files --name test_box --hostname local saltmasterless:latest bash"
+ podman_cmd = podman_cmd.replace("${PWD}",cwd)
+ podman_cmd_list = podman_cmd.split(' ')
+
+ # run a container
+ podman_id = subprocess.check_output(podman_cmd_list).decode().strip()
+ # return a testinfra connection to the container
+ yield testinfra.get_host("podman://" + podman_id)
+
+ # at the end of the test suite, destroy the container
+ subprocess.check_call(['podman', 'rm', '-f', podman_id])
+
+tests/test_project.py full salt run test
+----------------------------------------
+
+Here's a test that does a full salt state.apply on the container. This test is slow, since the container starts with just salt and git installed, and the project-under-test makes a lot of changes. Note the use of the '--local' argument to tell salt not to try to pull data from a salt master.
+
+.. code-block:: python
+
+ def test_full_salt_run(host):
+ print('running salt-call state.apply. This will take a few minutes')
+ cmd_output = host.run('salt-call --state-output=terse --local state.apply')
+
+ print('cmd.stdout: ' + cmd_output.stdout)
+
+ assert cmd_output.rc == 0
+ assert cmd_output.stderr == ''
+
+tests/test_project.py alternative pillar data test
+---------------------------------------------------
+
+In this example, suppose ./test_project/map.jinja included a check like below:
+
+.. code-block:: jinja
+
+ {% if not salt['pillar.get']('mandatory_pillar_item') %}
+ {{ raise('mandatory_pillar_item is mandatory') }}
+ {% endif %}
+
+And then there's a 'missing_mandatory_pillar_item' entry in the ./tests/data/pillar/top.sls as per above, and a ./tests/data/pillar/missing_mandatory_pillar_item.sls file exists that's missing the mandatory pillar item.
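+
+For reference, the two pillar files might look something like this. These are hypothetical contents; the only part that matters is whether mandatory_pillar_item is set:
+
+.. code-block:: yaml
+
+ # tests/data/pillar/test_zero.sls - sets the mandatory item
+ mandatory_pillar_item: some_value
+
+ # tests/data/pillar/missing_mandatory_pillar_item.sls - omits it
+ some_other_pillar_item: some_other_value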
+
+Then a test like the one below could force a salt run that uses this pillar data via the '--id' argument, and an assertion could check that the error was raised.
+
+.. code-block:: python
+
+ def test_missing_mandatory_pillar_item(host):
+ print('running another salt-call state.apply with bad pillar data.')
+ cmd_output = host.run('salt-call --state-output=terse --local --id=missing_mandatory_pillar_item state.apply')
+ assert "mandatory_pillar_item is mandatory" in cmd_output.stderr
+ assert cmd_output.rc != 0
+
+Running the tests
+=================
+
+The tests are kicked off via 'pytest', like any other pytest-based python project.
+
+.. code-block:: shell-session
+
+ workstation:~/projects/test_project.git# source ./env/bin/activate
+ (env) workstation:~/projects/test_project.git# pytest
+ ...
+ ================================================================================ 3 passed in 333.88s (0:05:33) ================================================================================
+
+
+What's Next
+===========
+
+- Set up the salt bootstrapping so it'll work without having to reach out to bootstrap.saltproject.io
+- Move the host fixture out of /tests/test_project.py to ./tests/conftest.py
+- Speed up the tests. As mentioned above, a full 'salt state.apply' for a project can take a few minutes on my workstation
+
+.. _Podman: https://podman.io/
+.. _Testinfra: https://github.com/pytest-dev/pytest-testinfra
+.. _Salt: https://saltproject.io/
+
--- /dev/null
+TS Stands For Test Station
+##########################
+
+:date: 2022-12-17
+:tags: now-you-know,infrastructure,test-stations
+:category: lessons
+:author: Jude N
+
+Every now and then someone will come along and spraypaint yellow "TS" marks on the sidewalks around the neighborhood, with arrows next to them. The arrows lead to little square metal covers with a hole in the middle.
+
+From `99% Invisible`_, I figured it had something to do with the gas line since the paint was yellow, and that there was probably some sort of access point under the covers, but I couldn't figure out why they were using 'TS' instead of something like 'GL' or 'GAS'.
+
+Recently I found one of the TS's pointing to a more informative cover:
+
+.. image:: {static}/images/test-station.png
+ :align: center
+ :alt: A yellow spraypainted 'TS' pointing to a metal cover including the text 'Test Station'.
+
+Apparently it's the test station for a `Cathodic Protection System`_.
+
+.. _99% Invisible: https://99percentinvisible.org/article/colorful-language-decoding-utility-markings-spray-painted-on-city-streets/
+.. _Cathodic Protection System: https://www.usbr.gov/tsc/training/webinars-corrosion/2014-02_TestingCathodicProtectionSystems/2014-02_TestingCathodicProtectionSystems_slides_508.pdf
--- /dev/null
+TakeMyBlood lessons learned
+###########################
+
+:date: 2022-08-22
+:tags: lessons
+:category: lessons
+:author: Jude N
+:status: draft
+
+I'm a frequent blood donor, but I don't like the `Red Cross donation site`_, and don't get me started on their Android app. I could go off on this for a while, but let's just say I've
+rage-quit setting up a donation appointment more than once.
+
+When I came across Julia Evans' `How to use undocumented web APIs`_ post, I looked into whether I could come up with some way of finding donation sites that worked better for me.
+
+
+
+
+
+.. _Red Cross donation site: https://www.redcrossblood.org/give.html
+.. _How to use undocumented web APIs: https://jvns.ca/blog/2022/03/10/how-to-use-undocumented-web-apis/
-Jude Nagurney
-#############
+JUDE NAGURNEY
+=============
-.. :date: 2019-07-14
+.. :date: 2025-01-30
.. :tags: resume
.. :category: resume
.. :author: Jude N
-.. :status: hidden
+.. :status: published
-| 1136 Forest Edge Drive
-| Reston, Virginia, 20190
-| Phone: (703) 829-0744
-| jude.nagurney@gmail.com
+Reston, VA \| (703)-403-4741 \| jude.nagurney@gmail.com \|
+`linkedin.com/in/judenagurney <https://linkedin.com/in/judenagurney>`_
-=======
-Summary
-=======
+Staff Engineer \| Senior Software Engineer \| DevOps and System Integration Specialist
+--------------------------------------------------------------------------------------
-| I'm a results-oriented software engineer with a strong focus on agile and devops processes.
+Seasoned Senior Software Engineer and Staff Engineer with a record of
+driving impactful engineering solutions in backend development, CI/CD
+pipelines, and complex system integrations.
-=================
-Technical Skills:
-=================
+Renowned for tackling undocumented technical challenges with tenacity
+and delivering innovative, scalable solutions that optimize workflows,
+enhance team efficiency, and strengthen production reliability.
-- **Languages** : Python, C/C++, Ruby, Perl, Java, SQL, Bash, lua, Expect, Tcl/Tk, UML OCL, COBOL
-- **Tools** : Puppet, Salt, Cobbler, Jenkins, emacs, vi, Jira, git, Gitlab, Docker, Mercurial, Subversion, Jira, SELinux
-- **Frameworks** : Django, Angular, Pylons, Rails, SqlAlchemy
-- **Operating Systems** : Linux (Ubuntu, Debian, RedHat, CentOS, Raspbian), Microsoft Windows, vxWorks, Solaris
-- **Databases** : PostgresSQL, MySQL, Oracle
-- **Standards Expertise** : SONET, SDH, TL1, LMP
+Expertise in implementing automation-first DevOps practices, championing
+infrastructure-as-code, and fostering high-performance engineering
+teams.
-----
+Trusted as a proactive problem-solver and mentor, consistently delivering
+results that exceed expectations in fast-paced, agile environments.
-===============
-Work Experience
-===============
+Key Achievements
+----------------
-------------------
-Layer 2 Technology
-------------------
-| **Reston Virginia**
-| **October 2016 - Present**
-
------------------
-Software Engineer
------------------
-
-Developed and maintained Python-based software projects
-
-- Developed an SMS-based solution for the Netgear LTE Mobile Horspot Router
- This included a deep dive into the AT modem commands used for sending and receiving SMS messages.
-
-- Developed SMS-based solutions using services such as Twilio, Plivo, Nexmo, and Vitelity.
-
-- Wrote a Errbot plugin for reporting open merge requests that were waiting for peer reviews.
-
-- Worked on porting projects to Raspbian to run on a Raspberry Pi 3 Model B.
- This included rebuilding packages for the arm7 architecture.
-
-- Developed and maintained Salt states and Puppet manifests for various projects.
-
-- Developed and maintained Jenkins continuous integration jobs for various projects.
- Also proactively tracked down the root causes of build failures when the jobs failed.
-
----------------------
-Applied Security Inc.
----------------------
-| **Reston Virginia**
-| **March 2010 – October 2016**
-
-.....................................
-Software Engineer (Development Group)
-.....................................
-| **April 2016 - October 2016**
-
-Wrote Python code for new projects and extended existing Python code bases
-
-- Extended a project to dynamically allocate AWS hosts based on system usage.
-
-- Wrote Python code for sending and receiving SMS messages through Plivo and Twilio
-
-- Developed and maintained Puppet manifests for development projects
-
-..................................
-Software Engineer (Security Group)
-..................................
-| **March 2014 - April 2016**
-
-Extended devops practices to cover security reviews
-
-- Introduced an SELinux strict policy workflow allowing developers to do most of the work associated with setting up a policy.
- Previously all policy work was done by a single engineer. Now policy work can be distributed across the development team.
-
-- Continued supporting puppet infrastructure for both the dev and ops environments, especially with respect to security-related changes.
-
-.....................................
-Software Engineer (Engineering Group)
-.....................................
-| **March 2012 - March 2014**
-
-Introduced 'infrastructure-as-code' to the ASI Engineering group.
-
-- Introduced Puppet and Cobbler provisioning into the Engineering workflow, cutting down the time it took from them to bring up new data centers drastically, and increasing consistency across all data centers.
-
-- Captured the state of the existing Engineering infrastructure in Puppet manifests
-
-- Introduced git and rpm packaging to internal Engineering projects
-
-- Liaison between development and operations, especially with respect helping development write code that wouldn't be denied against operation's SELinux policies.
-
-........................................
-Software Engineer (Web Technology Group)
-........................................
-| **March 2010 - March 2012**
-
-Managed, developed and maintained the Web Technology infrastructure environment.
-
-- Deployed Puppet across the WT infrastructure machine (DNS, Jenkins, Mercurial, Cobbler) as well as project-specific build server and test machines. Wrote scripts for monitoring the health of the puppet infrastructure.
-
-- Designed the architecture for a custom internal cloud for quickly building stacks of test VMs based on Puppet, Cobbler, PDNS, and VMWare ESX, and deployed a majority of the components.
-
-- Extended the number of Jenkins jobs to cover all WT projects, including building rpm/deb packages, publishing the packages to an internal repository, and then installing the packages from the rep to test machines.
-
-- Wrote and maintained a script to verify Jenkins jobs were configured consistently across WT.
-
-- Maintained the WT DNS zones and monitored the accuracy of the DNS records over time.
-
-- Tuned the WT VMWare ESX servers and performed troubleshooting on slow VMs.
-
-- Developed and maintained the packaging code of WT project
-
-- Wrote and maintained RPM spec files for CentOS-based projects and Debian build directories for Ubuntu-based projects.
-
-- Wrote and maintained /etc/init.d/ scripts for many WT projects.
-
-- Maintained yum and apt package repositories
-
-- Designed, developed and maintained a shared report building tool.
-
-- Designed and implemented a Django application for creating ad-hoc reports.
-
-- Designed and implemented Django and Pylons clients for the reporting tool.
-
-----
-
--------------
-NeuStar, Inc.
--------------
-| **Sterling, Virginia**
-| **March 2009 – February 2010**
-
-.........................................
-Software Engineer III (UltraDNS Services)
-.........................................
-
-Maintained the UltraDNS XML API and AXFR services.
-
-- Designed, implemented and deployed a system for maintaining secondary zone TSIG keys
+Reduced Merge Request Review Times: Designed and implemented the
+Margebot chatbot, which automated reminders for open merge requests,
+reducing review timelines from over a week to just two days. This
+initiative significantly improved team efficiency and streamlined the
+software release process.
-- Extended and maintained the UltraDNS Python-based XMLRPC API .
+Revolutionized SELinux Workflows: Introduced and implemented a team-wide
+SELinux strict policy workflow, previously handled by a single engineer.
+This innovation eliminated bottlenecks, increased team resiliency, and
+enabled seamless policy creation across the development team.
-- Extended and maintained the UltraDNS AXFR/IXFR zone transfer utility, written in C++.
+Pioneered Infrastructure-as-Code Practices: Transformed the software
+delivery pipeline by integrating industry-standard tools like Puppet,
+Jenkins, and rpm packaging. This ensured consistent testing
+environments, minimized production errors, and fostered trust between
+development and production teams.
-- Extended and maintained a utility for gathering DNS query timing statistics.
+Optimized CI/CD and Monitoring Processes: Spearheaded the integration of
+automated monitoring checks directly into software deliveries, enhancing
+incident response times and operational stability.
-- Worked on setting up consistent build procedures across the UltraDNS product line.
+Introduced Staff Engineer Role Framework: Advocated for and established
+the Staff Engineer career path within the organization, retaining top
+engineering talent by providing growth opportunities outside of
+management. This initiative strengthened the company’s engineering
+leadership and improved employee retention.
-- Worked closely with off-site engineers in Arizona and India.
-
-----
-
----------------
-StackSafe, Inc.
----------------
-| **Vienna Virginia**
-| **November 2006 – January 2009**
-
-..............................................
-Senior Software Engineer (Test Center Product)
-..............................................
-
-Designed, developed and maintained StackSafe's flagship Test Center product, which was awarded the 2008 ITIL Innovation of the Year.
-
-- Designed, implemented, maintained, and documented the product's TurboGears-based licensing system, including the design of its PostgreSQL database, and the sqlalchemy-migrations needed between releases.
-
-- Designed, implemented, and maintained the licensing and upgrade portions of the product's Rails-based GUI, including a Ruby-based cron job which would occasionally poll the upgrade server for new releases.
-
-- Designed, implemented, and maintained the product's Ruby-based command line interface.
-
-- Developed and maintained the products Python-based storage daemon, which was capable of surfacing a virtual machines QEMU disk image over the network by using qemu-nbd and nbd-client.
-
-- Helped develop and test the product's Python-based management daemon which was responsible for starting and stopping virtual machines.
-
-- Performed root cause analysis after build failures, sometimes having to dig pretty deep into code I was not written , including Python, C++, Ruby , and Perl code and bash scripts Many times these failures turned out to be locking/synchronization issues between various system components.
-
-- Maintained the Debian packages and apt-get repository using reprepro.
-
-- Implemented and maintained the installation scripts associated with the products deb-based packaging.
-
-- Customized the Debian install process to install our product along with the normal Ubuntu server installation, and to verify that the host machine supported virtualization. Since the Debian install process is not documented very well, this usually involved having to walk through the Debian-installer source to find out how it worked.
-
-- Designed, implemented and maintained the product's build environment, including a Python based nightly-build script which built all the source, loaded it onto the appropriate test machines, and run the smoke tests.
-
-- Acted as the primary QA engineer until a full time tester was hired, leading bi-weekly bug scrubs, and making sure all the outstanding issues were resolved before cutting a release.
-
-- Championed unit testing as an integral part of the normal development environment
-
-- Participated in code reviews, and monitored the SVN commit notices for questionable commits, especially after build failures.
-
-- Used Puppet to maintain the configuration Engineering lab's collection of build and test machines.
-
-- Submitted patches and bugs against the open source projects we were using on the product.
-
-- Worked closely with off-site engineers in California and New Jersey.
-
-----
-
--------------
-Cisco Systems
--------------
-| **Herndon Virginia**
-| **November 2000 – September 2006**
-
-..........................
-Lead Engineer(LMP Feature)
-..........................
-
-Led development LMP (RFC 4204) feature on the 15454DWDM multi-service transport platform.
-
-- Wrote design document and test plans for the LMP feature.
-
-- Implemented the IDL, CORBA layer, and TL1 (Transaction Language 1 – a widely used telecommunications management protocol) interface code for LMP feature
-
-- Tracked incoming defect reports for the LMP implementation
-
-- Participated in successful interoperability tests with the Calient PXC at the KDDI research labs outside Tokyo. Fixed and retested minor issues on-site during testing.. KDDI was very impressed with the quick turnaround time, saying it had taken a competitor months to make similar changes.
-
-- Trained support engineers in the LMP feature during technology transfer
-
-.................................
-Lead Engineer (15600 TL1 Feature)
-.................................
-
-Led development for the TL1 interface for the 15600 multi-service transport platform.
-
-- Assigned priorities for TL1-related bugs on the 15600 platform to a team of 7 engineers located in California, Texas, Italy, and India. Remove roadblocks associated with fixing the problems. Adjusted workloads to keep engineers from becoming swamped or burned out. Participated in most code reviews related to the 15600’s TL1 interface.
-
-- Removed 100K SLOC by aligning divergent code bases between the 15454 and 15600 platforms. The common code base freed up engineers who had previously been dedicated to either the 15454 or 15600. Fixing a bug or extending one platform also ended up being reflected on the other platform as well.
-
-- Worked closely with other Cisco sites in Texas, North Carolina, and California, as well as offshore developers in India, and Italy.
-
-________________________________________________
-Software Engineer(15327, 15454, 15600 platforms)
-________________________________________________
-
-Provided full life cycle support across a number of Cisco’s multi-service transport platforms
-
-- Wrote and maintained an extensive TL1 regression test suite in Expect. The test suite originally was meant to provide early testing for OSMINE deliverables. The tests were so successful, they began being used as a stability metric after branch syncs or merges. Before the tests were available, it might be weeks before a sync error was discovered. After the tests were implemented, sync errors were noticed within a day.
-
-- Implemented Telcordia’s COPY-RFILE feature. This was the first time the feature was developed by a vendor. Worked closely with Telcordia engineers to work out the kinks in their specification.
-
-- Resolved 800+ defects over 5 years, making me on of the top 5 contributors on the team. Wrote 700+ defect reports over 5 years despite not being a QA tester. Posted more defects than most of the dedicated testers.
-
-- Participated in multiple OSMINE certification cycles. The OSMINE testing cycle forced code delivery to Telcordia months before the software went through formal QA. This test suite guaranteed high quality software was delivered despite the lack of QA. OSMINE certification cost ~6M, so an unsuccessful certification effort would have been extremely expensive.
-
-- Had to carry a pager once or twice a year on a rotating basis in case any serious problems happened in the field that the regular support engineers couldn't resolve.
-
-----
-
-----
-CACI
-----
-| **Fairfax, Virginia**
-| **March 1998 – November 2000**
-
-....................................
-Senior Systems Analyst / Task Leader
-....................................
+Area of Expertise
+-----------------
-_________________________
-Web Invoice System (WinS)
-_________________________
+Technical Skills: Python \| C++ \| Ruby \| SQL \| Bash \| JavaScript \|
+Typescript \| Lua \| SaltStack \| Puppet \| Docker \| Jenkins
-Developed Oracle stored procedures for digital signature application
+Infrastructure and Automation: Infrastructure-as-Code \| CI/CD Pipelines
+\| Automation Tools \| System Integration \| Monitoring Frameworks
+(Nagios, Zabbix)
-- Migrated embedded SQL in a Java servlets application to use stored procedures, allowing the database developers to focus on the SQL code, and the Java developers to focus on the servlet code.
+DevOps and Cloud: AWS \| Virtualization (VMWare, QEMU) \|
+Containerization \| Deployment Automation \| Configuration Management
-- Developed regression test plan for stored procedures.
+Database Management: PostgreSQL \| MySQL \| Oracle \| SQLite \| Database
+Schema Design
-___________
-DIFMS/NIMMS
-___________
+Frameworks and Platforms: Django \| Angular \| TurboGears \| SqlAlchemy
+\| Rails \| Pylons
-Developed the project's C++ framework, and provided custom tools to support various aspects of the life cycle on a COBOL reengineering project
+Systems and OS: Linux (Ubuntu, Debian, RedHat, CentOS, Raspbian) \|
+Windows \| SELinux Policy Implementation
-- Developed framework for server side batches in C++
+Collaboration and Leadership: Technical Mentorship \| Cross-Functional
+Collaboration \| Knowledge Sharing \| Team Development
-- Implemented an engine for translating Rational Rose class diagrams into Oracle DDL scripts
+Problem-Solving and Innovation: Troubleshooting \| Root Cause Analysis
+\| Process Optimization \| Workflow Automation
-- Implemented a Microsoft Word template for capturing business rules which were then ported to Rational Rose
+PROFESSIONAL EXPERIENCE
+-----------------------
-- Implemented a database monitoring tool in Perl to ensure various design decisions were being maintained in the database schema
+Staff Engineer – L2T, LLC, Herndon, VA February 2023 - December 2024
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Implemented a regression test system for the batches which was incorporated into the nightly builds.
+- Enhanced onboarding efficiency by creating a series of structured,
+ project-based orientation presentations, equipping new employees with
+ the tools and knowledge needed to integrate seamlessly into the team.
-----
+- Rescued and revitalized a dormant, mission-critical accounting
+ project, becoming the primary point of contact and ensuring
+ uninterrupted service, which restored stakeholder confidence and
+ minimized operational risks.
-----------------------
-RS Information Systems
-----------------------
-| **McLean Virginia**
-| **June 1995 – February 1998**
+- Spearheaded the deployment pipeline by tagging new software releases
+ and implementing streamlined production processes, accelerating
+ release cycles while maintaining high-quality standards.
-....................................
-Senior Systems Analyst / Task Leader
-....................................
+Senior Software Engineer – L2T, LLC, Herndon, VA October 2016 - February 2023
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-____________________________________________________
-Midwest Electronic One-Stop Shopping Service (MEOSS)
-____________________________________________________
+- Introduced and advocated for the Staff Engineer role, securing
+ leadership buy-in and formalizing a career path for senior engineers,
+ fostering talent retention and enhancing organizational structure.
-Developed a credentialing and permitting system used by the trucking industry and state motor vehicle departments in seven Midwestern states.
+- Drove code quality and knowledge sharing through meticulous software
+ change reviews, enforcing robust standards, and creating a
+ collaborative engineering culture focused on continuous improvement.
-- Designed and implemented the mapping between the database schema and the EDI documents, including the design of the database schema
+- Engineered advanced SMS-based solutions using Teltonika 4G routers
+ (RUT360, RUT240, RUT241) and Netgear Nighthawk routers to integrate
+ reliable, scalable messaging capabilities, enabling remote system communication.
-- Developed the installation process for state and industry versions of the software.
+- Automated complex deployments by developing and maintaining Salt
+ states for Nextcloud, improving system scalability, reducing manual
+ effort, and enhancing operational efficiency.
-- Lead a team of four PowerBuilder developers
+- Streamlined CI/CD pipelines by developing Jenkins jobs and automating
+ deployment processes with Salt states and Puppet manifests, improving
+ release efficiency and system reliability.
-__________________________________________________
-Virginia/Maryland CVISN Pilot Credentialing System
-__________________________________________________
+- Enhanced engineering workflows by creating an Errbot plugin to track
+ open merge requests, reducing review timelines from weeks to days and
+ ensuring smoother project deliveries.
-Ported the MEOSS Credentialing System to the states of Virginia and Maryland
+- Optimized system compatibility by porting key projects to Raspbian
+ for Raspberry Pi 3 Model B, including rebuilding packages for arm7
+ architecture to support diverse hardware requirements.
-- Added client/server capabilities to the state portion of the MEOSS credentialing system
+- Modernized monitoring and troubleshooting frameworks with Nagios,
+ ensuring real-time issue detection and maintaining high operational
+ performance across the development network.
-- Developed a direct-dial communications subsystem to bypass VAN charges.
+- Mentored and developed junior engineers, fostering a collaborative
+ team environment and equipping them with the skills to tackle complex
+ technical challenges effectively.
-- Developed a Perl script to diagnose common problems in ODBC.INI files.
+- Automated Android workflows with the Automate app, enabling efficient
+ device control and monitoring, and delivered robust production
+ support through root cause analysis and issue resolution.
-____________________________________________________
-IFTA Clearinghouse / Quarterly Tax Submission System
-____________________________________________________
+Software Engineer (Development Group) – Applied Security Inc., Reston, VA April 2016 – October 2016
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Developed a system for gathering fuel tax data from states.
+- Designed scalable cloud solutions by extending a VMWare-based project
+ to dynamically allocate hosts based on system usage, improving
+ resource utilization and reducing operational costs.
-- Implemented the mapping between the database schema and the ANSI X12 813 EDI file format.
+- Developed SMS communication capabilities by writing Python-based
+ scripts for sending and receiving messages through Plivo and Twilio,
+ enabling seamless messaging integration into applications.
-- Developed the system for importing EDI files into a DB2 database.
+- Automated infrastructure deployment by developing and maintaining
+ Puppet manifests, enhancing system consistency and streamlining
+ development and deployment processes.
-- Lead a team of 2 C++ developers
+Software Engineer (Security Group) – Applied Security Inc., Reston, VA March 2014 – April 2016
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Developed the VAN communication system for sending and receiving EDI files
+- Transformed security operations by introducing a groundbreaking
+ SELinux strict policy workflow, enabling developers to independently
+ manage policy creation, eliminating bottlenecks, and enhancing
+ team-wide efficiency and resilience.
-______________________________________________
-Hazardous Material Registration System (HARPS)
-______________________________________________
+- Integrated security into DevOps pipelines by extending automated
+ processes to include rigorous security reviews, proactively
+ mitigating vulnerabilities and ensuring compliance with industry
+ standards.
-Developed a system allowing states to share hazardous material registration information.
+- Fortified infrastructure reliability by maintaining and advancing
+ Puppet configurations across development and operations environments,
+ with a strategic focus on implementing and automating critical
+ security updates.
-- Gathered system requirements and wrote the technical specification document.
+Software Engineer (Engineering Group) – Applied Security Inc., Reston, VA March 2012 – March 2014
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Mapped database schema to the applicable ANSI X12 EDI formats.
+- Pioneered infrastructure-as-code practices by introducing Puppet and
+ Cobbler provisioning, drastically reducing data center setup times
+ and ensuring consistency across all operational environments.
-----
+- Standardized infrastructure management by capturing the existing
+ Engineering environment in Puppet manifests, streamlining deployment
+ processes and enabling robust configuration control.
-==========
-Education:
-==========
+- Modernized engineering workflows by implementing Git for version
+ control and rpm packaging, enhancing collaboration and accelerating
+ development cycles across internal projects.
-----------------------------------------------------------------------
-George Mason University, Information Technology and Engineering School
-----------------------------------------------------------------------
-Masters of Science / Computer Science / May 2006
+- Bridged development and operations teams by acting as a liaison,
+ ensuring code compatibility with SELinux policies, minimizing
+ deployment roadblocks, and fostering cross-functional collaboration.
-------------------------------------------
-Cornell University, College of Engineering
-------------------------------------------
-Bachelor of Science / Computer Science / May 1990
+Software Engineer (Web Technology Group) – Applied Security Inc., Reston, VA March 2010 – March 2012
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Transformed infrastructure management by deploying Puppet across DNS,
+ Jenkins, Mercurial, and Cobbler systems, automating server
+ configuration and ensuring consistent performance across
+ project-specific build and test environments.
-=====================
-Open Source Projects:
-=====================
+- Architected a custom internal cloud solution leveraging Puppet,
+ Cobbler, PDNS, and VMware ESX to enable rapid creation of test VM
+ stacks, significantly improving testing efficiency and deployment
+ speed.
+
+- Expanded Jenkins automation capabilities by designing and extending
+ jobs to build rpm/deb packages, publish them to internal
+ repositories, and deploy to test machines, streamlining CI/CD
+ pipelines.
+
+- Developed scalable packaging solutions by writing and maintaining RPM
+ spec files for CentOS and Debian build directories for Ubuntu,
+ ensuring efficient deployment across diverse environments.
+
+- Built robust automation scripts for configuration consistency and
+ system initialization, including /etc/init.d/ scripts for various
+ projects, while maintaining yum and apt repositories to support
+ smooth package management.
+
+- Created advanced reporting tools by designing Django- and
+ Python-based applications for generating ad-hoc reports, improving
+ data accessibility and decision-making processes.
+
+Software Engineer III – NeuStar Inc., Sterling, VA March 2009 – February 2010
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Secured the DNS key management system with transaction signatures
+ (TSIGs) as per RFC-2845, ensuring robust security for secondary zone
+ transfers and compliance with industry standards.
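
RFC 2845 TSIG signing of zone transfers is typically configured with a shared secret known to both endpoints; a minimal BIND-style sketch (illustrative only — key name, server address, and secret are placeholders, not the UltraDNS implementation):

```
# Illustrative named.conf fragment; all names and values are placeholders.
key "xfer-key" {
    algorithm hmac-md5;                  # the algorithm RFC 2845 originally specified
    secret "c2VjcmV0LXBsYWNlaG9sZGVy";   # base64-encoded shared secret
};

server 192.0.2.1 {
    keys { "xfer-key"; };                # sign traffic to this primary with the key
};
```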
+
+- Advanced DNS scalability and performance by extending and optimizing
+ the UltraDNS Python-based XMLRPC API and AXFR/IXFR zone transfer
+ utility (C++), enabling faster, more reliable domain updates across
+ environments.
+
+- Enhanced query diagnostics by developing a utility to capture DNS
+ query timing statistics, improving real-time performance monitoring
+ and reducing troubleshooting timelines across critical systems.
+
+- Standardized build automation by establishing consistent build
+ procedures across the UltraDNS product line, fostering operational
+ efficiency and seamless collaboration with off-site teams in Arizona
+ and India.
+
+Senior Software Engineer (Test Center Product) – StackSafe Inc., Sterling, VA November 2006 – January 2009
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Led development of StackSafe’s flagship Test Center product, earning
+ the 2008 ITIL Innovation of the Year Award for revolutionizing IT
+ testing and operational reliability.
+
+- Designed and implemented a TurboGears-based licensing system,
+ including PostgreSQL database architecture and SQLAlchemy migrations,
+ ensuring seamless licensing workflows and release updates.
+
+- Optimized software deployment by building Rails-based GUI licensing
+ and upgrade features, supported by an automated Ruby cron job to
+ manage server checks for new releases.
+
+- Enhanced virtual storage integration by creating a Python-based
+ storage daemon, enabling network access to virtual machine QEMU disk
+ images using qemu-nbd and nbd-client.
+
+- Strengthened virtualization management through contributions to the
+ Python-based management daemon, improving scalability and ensuring
+ efficient virtual machine lifecycle operations.
+
+- Automated deployment processes by customizing the Debian installer to
+ streamline product-specific installations and verify virtualization
+ readiness for host systems.
+
+- Streamlined CI/CD pipelines by engineering a Python-driven
+ nightly-build system that compiled source code, deployed it to test
+ environments, and executed smoke tests for quality assurance.
+
+- Championed QA excellence by conducting root cause analyses of build
+ failures across multiple languages (Python, C++, Ruby, Perl), driving
+ stability through unit testing and bi-weekly bug scrubs.
+
+- Ensured infrastructure reliability by managing engineering lab
+ configurations and automating package management with Puppet, while
+ contributing patches to key open-source projects.
+
+- Fostered cross-functional collaboration by coordinating with off-site
+ engineering teams in California and New Jersey, aligning development,
+ testing, and delivery efforts to meet deadlines and quality
+ standards.
+
+Software Engineer – Cisco Systems, Herndon, VA November 2000 – September 2006
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Led the development of the LMP (RFC 4204) feature for the 15454 DWDM
+ multi-service transport platform, driving innovation in
+ telecommunications protocols and enhancing product capabilities.
+
+- Designed and implemented the LMP interface, including the IDL, CORBA
+ layer, and TL1 (Transaction Language 1) code, ensuring seamless
+ protocol communication and system integration.
+
+- Streamlined defect resolution by tracking and addressing incoming
+ defect reports for the LMP implementation, maintaining product
+ quality and ensuring adherence to high engineering standards.
+
+- Achieved interoperability success during tests with the Calient PXC
+ at the KDDI research labs in Tokyo, resolving issues on-site within
+ days—a task competitors required months to complete, earning
+ recognition for exceptional turnaround time.
+
+- Enhanced team knowledge transfer by training support engineers on the
+ LMP feature, ensuring smooth deployment and long-term maintenance of
+ the system.
+
+- Assigned priorities for TL1-related bugs on the 15600 platform to a team of 7
+ engineers located in California, Texas, Italy, and India. Removed roadblocks
+ associated with fixing the problems.
+
+- Removed 100K SLOC by aligning divergent code bases between the 15454 and 15600
+ platforms. The common code base freed up engineers who had previously been
+ dedicated to either the 15454 or the 15600, and a bug fix or extension on one
+ platform was now reflected on the other as well.
+
+- Wrote and maintained an extensive TL1 regression test suite in Expect. The test
+ suite was originally meant to provide early testing for OSMINE deliverables, but
+ the tests proved so successful that they were adopted as a stability metric after
+ branch syncs or merges. Before the tests were available, it could take weeks to
+ discover a sync error; afterwards, sync errors were noticed within a day.
+
+Previous Positions
+------------------
-Certmaster (http://github.com/jude/certmaster) (2015-Present)
-- Forked the Fedora Certmaster project adding support for multiple CAs and hash functions other than sha1
+- Software Engineer (15327, 15454, 15600 Platforms) – Cisco Systems
+ Herndon, VA
-Haskell Augeas FFI Bindings (http://trac.haskell.org/augeas/ (2009-Present)
+- Senior System Analyst – CACI, Fairfax, VA
-- Provided foreign function interface bindings so Haskell users could easily use the Augeas library
+- Senior System Analyst – RS Information Systems, McLean, VA
-python-module-for puppet (http://github.com/jude/python-module-for-puppet/tree/master) 2009
+OPEN SOURCE PROJECTS
+--------------------
-- Extended Python packaging support in Puppet to include installation of specific package versions
+- Certmaster (2015 – Present): Enhanced the Fedora Certmaster project
+ by adding support for multiple Certificate Authorities (CAs) and
+ advanced hash functions beyond sha1, increasing its flexibility and
+ security.
+- Haskell Augeas FFI Bindings (2009 – Present): Developed foreign
+ function interface bindings, enabling Haskell users to seamlessly
+ interact with the Augeas library for configuration file management.
-Pwan OCL Library: (http://sourceforge.net/projects/pwan) (1999-2000)
+- Python-Module-for-Puppet (2009): Extended Python packaging support in
+ Puppet to enable the installation of specific package versions,
+ improving modularity and version control.
-- Developed a YACC parser for the UML Object Constraint Language version 1.3
+- Contributions on GitHub (2009 – Present): Actively contributed to
+ various open-source projects, submitting patches and enhancing
+ functionality across diverse repositories.
-Various other patches on github (https://github.com/jude?tab=activity) (2009-Present)
+EDUCATION
+---------
+Master of Science in Computer Science \| George Mason University,
+Information Technology and Engineering School ’06
+Bachelor of Science in Computer Science \| Cornell University, College
+of Engineering ’90
TRANSLATION_FEED_ATOM = None
FEED_RSS = 'feeds/rss.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
-CATEGORY_FEED_RSS = 'feeds/cat.%s.rss.xml'
-TAG_FEED_RSS = 'feeds/tag.%s.rss.xml'
+CATEGORY_FEED_RSS = 'feeds/cat.{slug}.rss.xml'
+TAG_FEED_RSS = 'feeds/tag.{slug}.rss.xml'
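
The `%s` → `{slug}` change follows Pelican's switch from printf-style to `str.format`-style placeholders in feed URL settings; a standalone sketch of the expansion (not Pelican's actual code path):

```python
# Newer Pelican releases expand feed settings with str.format(),
# so placeholders are named fields rather than printf-style '%s'.
CATEGORY_FEED_RSS = 'feeds/cat.{slug}.rss.xml'

# For a category whose slug is 'hints', the generated feed path is:
path = CATEGORY_FEED_RSS.format(slug='hints')
print(path)  # feeds/cat.hints.rss.xml
```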
FEED_MAX_ITEMS = 20
# Blogroll
TAG_CLOUD_STEPS = 4
TAG_CLOUD_MAX_ITEMS = 100
-THEME = "../pelican-themes/pelican-mockingbird"
+#THEME = "../pelican-themes/pelican-mockingbird"
+THEME = "../pelican-themes/Flex"
+THEME_COLOR = 'dark'
+USE_LESS = False
+MAIN_MENU = True
+SITELOGO = '/blog/images/profile.png'
+FAVICON = '/blog/images/favicon-16x16.png'
+SUMMARY_MAX_LENGTH = 0
DISPLAY_PAGES_ON_MENU = True
+PLUGIN_PATHS = ['../pelican-plugins']
+PLUGINS = ['plantuml']
+
+MENUITEMS = (
+ ("Archives", "/blog/archives.html"),
+ ("Categories", "/blog/categories.html"),
+ ("Tags", "/blog/tags.html"),
+)
+
+SOCIAL = (
+ ("github", "https://github.com/jude"),
+ ("rss", "blog/feeds/all.rss.xml"),
+ ("mastodon", "https://aleph.land/@pwan")
+)
TRANSLATION_FEED_ATOM = None
FEED_RSS = 'feeds/rss.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
-CATEGORY_FEED_RSS = 'feeds/cat.%s.rss.xml'
-TAG_FEED_RSS = 'feeds/tag.%s.rss.xml'
+CATEGORY_FEED_RSS = 'feeds/cat.{slug}.rss.xml'
+TAG_FEED_RSS = 'feeds/tag.{slug}.rss.xml'
FEED_MAX_ITEMS = 20
+DISPLAY_PAGES_ON_MENU = True
+
DELETE_OUTPUT_DIRECTORY = True
+PLUGIN_PATHS = ['../pelican-plugins']
+PLUGINS = ['plantuml']
+
+
# Following items are often useful when publishing
#DISQUS_SITENAME = ""