Hopefully you can’t notice it much, but I recently ported this blog from Pelican to Hugo.
There were a few reasons driving this:
I was using reStructuredText and wanted to move to something supporting markdown. Pelican supports both formats so that wasn’t enough to move to another platform.
The provisioning for my Pelican environment wasn’t captured anywhere and the thought of capturing the state of the Python packages and Pelican themes was daunting. The single-executable Gitea program has been working out well, so I hoped the same with the single-executable Hugo package.
Had I poked around a little on the web first, I may have followed the approach from this article on using Pelican with nix, but I ended up with a pretty basic flake.nix setup instead.
I wanted to try out something new. OK, so maybe that’s the only reason I migrated.
At first, I tried using the Hugo flex theme, but it didn’t support that sweet left sidebar, so I went with the Hugo hyde theme instead and overrode the layout and CSS until it looked as close as possible to the old blog.
This is another area where, if I had spent more time looking for themes, I probably would have found one closer to what I was already using. But tweaking the hyde theme gave me an excuse to realize how much I still dislike working with CSS.
Go templates are so ugly
Tweaking the theme layouts was my first exposure to Go templates. I get that they’re popular since they’re in the Go standard library, and they’re designed for speed, but after working with Jinja for many years I didn’t find them easy to work with. Some weirdness I ran into:
Prefix notation for conditionals ala {{ if eq $value 1}}.
Use of index instead of array-style syntax, e.g. {{ index some_dict value }} instead of {{ some_dict[value] }}.
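For comparison, here's a tiny made-up example of the same lookup-and-compare in both syntaxes (the variable names are just for illustration):

```
Go template: {{ if eq (index $versions "hugo") 1 }}current{{ end }}
Jinja:       {% if versions["hugo"] == 1 %}current{% endif %}
```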
I wrote a quick and dirty script to port the Pelican content over to Hugo.
The script extracted the Pelican-style meta data, then called pandoc to convert the reStructuredText to markdown, and then added Hugo-style metadata back to the post.
This got the content about 90% converted. One problem I ran into was that Hugo uses draft: true to signify draft posts, where Pelican uses status: Draft. I fixed these manually since I have so few posts.
The second issue was that pandoc’s choice of syntax tags on the code blocks was often wrong, and again I fixed these blocks manually.
Click here to see the full migration script in all its glory.
# convert the pelican rst files to Hugo md
import json
import os
import re
import subprocess
import sys

import yaml

re_title = re.compile(r'^\#*$')
re_meta = re.compile(r'^:([^:]+):\s+(.*)$')


def convert_meta(filename):
    meta_data = {}
    regular_data = ""
    title = ''
    with open(filename, "r") as a_file:
        # Going to do this quick and dirty.
        # Read the file line by line and look ones that match ':something:
        for a_line in a_file:
            if title == "":
                title_match = re_title.match(a_line)
                if not title_match:
                    title = a_line.rstrip()
                    meta_data['title'] = title
            a_match = re_meta.match(a_line)
            if a_match:
                value = a_match.group(2)
                if ',' in value:
                    rhs = value.split(',')
                else:
                    rhs = value
                meta_data[a_match.group(1)] = rhs
            else:
                regular_data += a_line

    meta_dump = yaml.safe_dump(meta_data, default_flow_style=False)

    temp_filename = "/tmp/temp.rst"
    with open(temp_filename, "w") as temp_file:
        temp_file.write(regular_data)

    name, extension = os.path.splitext(filename)
    new_name = name + ".md"

    # Have pandoc convert the file and read the output to a string
    subprocess.run('pandoc -f rst -t markdown -o /tmp/temp.md /tmp/temp.rst', shell=True)
    with open("/tmp/temp.md", "r") as temp_out:
        temp_contents = temp_out.read()

    with open(new_name, "w") as out_file:
        out_file.write('---\n')
        out_file.write(meta_dump)
        out_file.write('---\n\n')
        out_file.write(temp_contents)


# Walk through all the rst files under the directory.
for root, dirs, files in os.walk('.'):
    for a_file in files:
        if a_file.endswith('.rst'):
            convert_meta(root + "/" + a_file)
Preprocessing of diagrams
I had one post in the old blog that used the Pelican PlantUML plugin, but Hugo doesn’t have something comparable. I poked around with converting that diagram to Mermaid as per https://gohugo.io/content-management/diagrams/, but pulling in a mermaid.js file that was over two megs to render a single static diagram seemed like overkill.
Instead I switched to preprocessing the diagram with PlantUML and updating the post to show the SVG image instead.
Here’s my Makefile for running plantuml on any of the diagrams that need updating. I went with make so I won’t have to re-render unchanged diagrams on each blog post. Yes, I spent more than five seconds remembering how to use pattern rules even though I currently have only one PlantUML diagram.
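In case it helps, here's a minimal sketch of that kind of Makefile; the static/diagrams path and the -tsvg output format are assumptions, so adjust to your layout:

```make
# Render every PlantUML source under static/diagrams to SVG,
# but only when the .puml file is newer than its .svg output.
# (Remember the recipe line has to start with a tab.)
DIAGRAMS := $(patsubst %.puml,%.svg,$(wildcard static/diagrams/*.puml))

all: $(DIAGRAMS)

%.svg: %.puml
	plantuml -tsvg $<
```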
I didn’t consider using GoAT diagrams since I wanted the diagrams to be in GraphViz style so I didn’t have to worry about layout issues.
Vale is pretty nice
I also started using Vale as a prose linter and it’s working out OK after I turned off the ’no first person’ and ’no use of the verb to-be’ rules. Here’s the .vale.ini I’m currently using:
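Something along these lines; the style and rule names here (write-good.E-Prime, Microsoft.FirstPerson) are assumptions, so swap in whichever styles you actually have installed:

```ini
StylesPath = .vale/styles
MinAlertLevel = suggestion

[*.md]
BasedOnStyles = Vale, write-good, Microsoft
# Turn off the 'no first person' and 'no to-be verbs' rules
# (rule names assumed; check against the styles you've installed)
Microsoft.FirstPerson = NO
write-good.E-Prime = NO
```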
Most of these are minor UI choices Gitea made that I don't like.
There's certainly nothing here to force me to switch to another forge.
I don't really like that you have to open issues against a repo,
and that there's not a good way to move issues between repos.
If you end up opening an issue against the wrong repo, you have to
close the first issue and open a second issue in the correct repo.
For some reason, I always have trouble adding new users to a repo.
It's done through the
https://<git-server>/<organization-name>/<repo-name>/settings/collaboration
page.
You can also define a collection of users called a 'team' for an
organization at https://<git-server>/<organization-name>/teams and
then add the team to the repo.
I don't like the 'collaborator' terminology since it makes me
feel like a quisling. 🤷
Gitea can convert issue instances to links, but you have to use a
kludgy '<organization-name>/<repo-name>#<issue-number>'
format.
It would be better if the organization-name and repo-name weren't
needed and defaulted to the current repo's values.
I always have trouble finding the commit graph for a repo.
Go to the '<> Code' link for the repo, then the 'Commits'
link, and there should be a 'Commit Graph' link next to the branch
name.
It should also be available at
https://<gitea-server>/<org-name>/<repo-name>/graph.
Setting up branch protection was a little hard.
To do it, go to the Settings for the repo, then click the 'Branch'
button, then click the 'Add New Rule' button.
I enabled the 'Require Signed Commits' and 'Enable Status Check'
checks.
The Status check pattern format is
'<name of .gitea/workflows workflow> / <job name> (<git trigger>)'.
For example, my workflow file looks like below, making the status
check 'On Push Workflows / On-Push-Job (push)'.
If you've already triggered the workflow, it should show up on the
page in a text box below the setting.
name: On Push Workflow
run-name: ${{ gitea.actor }} pushed out some changes. 🚀
on:
  push:
    branches:
      - "*"
jobs:
  On-Push-Job:
    <<clipped the rest of the workflow>>
The surprising
I ran into one pleasant surprise and two that weren't as pleasant.
Cloning from Gitea template repos worked really well, even working
on file names.
I had a file named test_${REPO_NAME_CAMEL}.py in the tests
directory of my template repo, and Gitea successfully created the
file with the correct name.
Make sure the .gitea/template file in the template repo includes
all the files that the templates should create. It wasn't enough
for me to add a tests entry to .gitea/template; I had to add
tests/** to get Gitea to apply the templates to everything under
the tests directory.
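My understanding is that the file is just a list of glob patterns, one per line, naming the files Gitea should expand; something like this (the entries other than tests/** are made up for illustration):

```
README.md
tests/**
```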
Setting up Gitea to sign merge commits was a big hassle
I spent a lot of time trying to add the key to /home/git/.gnupg,
but Gitea expects the key to be at
/var/lib/gitea/home/data/.gnupg.
Thanks to Ivan's
post
for pointing out the correct directory.
I'll close with the biggest gotcha I've run into: the
actions/checkout@v4 steps used to check out repo code in
.gitea/workflows were cloning the actions repo from GitHub.
The whole reason for spinning up a local forge was to be able to
keep working when GitHub was down, or when it enshittified
past the point of usefulness.
To avoid this GitHub dependency, I added the block below to
/etc/gitea/app.ini, then created an 'actions' organization and
pushed a mirror of https://github.com/actions/checkout.git into the
'actions' organization.
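Something along these lines; DEFAULT_ACTIONS_URL under [actions] is the documented knob for this, but the exact value (a URL vs. the literal self) varies by Gitea version, so treat this as a sketch:

```ini
[actions]
ENABLED = true
; Point the default action lookup at the local Gitea instance instead of github.com,
; so actions/checkout resolves to the mirrored 'actions' organization repo.
DEFAULT_ACTIONS_URL = https://<my-gitea-server>
```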
Once I did this and restarted Gitea, I saw the workflows cloning
from https://<my-gitea-server>/actions/checkout instead of
https://github.com/actions/checkout.
This post covers an approach I've used for adding pytest-style unit tests
to my Salt-related projects.
The content below may be specific to Salt, but the testing techniques
should work on any project using Jinja2.
Project Directory Structure
In my existing salt project repo, I added a tests subdirectory, a
setup.cfg with the contents below, and added a test target to the
project's Makefile.
I also installed the pytest and jinja2 eggs in a virtualenv in my
working directory.
Here's a snippet of the Makefile for kicking off the tests:
Note that since the tests are python code, you should run them through
whatever linters and style-checkers you're using on the rest of your
python code.
test:
	py.test-3
	pycodestyle ./tests/*.py
And here's the setup.cfg:
Without the extra '--tb=native' argument, pytest would sometimes
throw an internal error when Jinja ended up throwing an exception, as
we'll see later below.
[tool:pytest]
python_files = test_*.py tests/__init__.py tests/*/__init__.py
#uncomment the line below for full unittest diffs
addopts =
    # Any --tb option except native (including no --tb option) throws an internal pytest exception
    # when jinja exceptions are thrown
    --tb=native
    # Uncomment the next line for verbose output
    # -vv

[pycodestyle]
max-line-length = 999
ignore = E121,E123,E126,E226,E24,E704,E221,E127,E128,W503,E731,E131,E402
Note there is a test*.py file for each file that includes Jinja2
markup.
tests/conftest.py
The conftest.py contains the common fixtures used by the tests. I've
tried adding docstring comments to explain how to use the fixtures, but
also see the examples.
import pytest
from unittest.mock import Mock

import jinja2
from jinja2 import Environment, FileSystemLoader, ChoiceLoader, DictLoader, StrictUndefined


class RaiseException(Exception):
    """ Exception raised when using raise() in the mocked Jinja2 context """
    pass


@pytest.fixture()
def mocked_templates(request):
    """ A dict of template names to template content.

    Use this to mock out Jinja 'import "template" as tmpl' lines.
    """
    mocked_templates = {}
    return mocked_templates


@pytest.fixture(autouse=True)
def jinja_env(request, mocked_templates):
    """ Provide a Jinja2 environment for loading templates.

    The ChoiceLoader checks the DictLoader first when mocking any 'import' style templates,
    then the FileSystemLoader checks the local file system for templates.
    The DictLoader is first so Jinja won't end up using the FileSystemLoader for
    templates pulled in with an import statement which doesn't include the 'with context'
    modifier.
    Setting undefined to StrictUndefined throws exceptions when the templates use undefined variables.
    """
    test_loader = ChoiceLoader([
        DictLoader(mocked_templates),
        FileSystemLoader('.'),
    ])
    env = Environment(loader=test_loader,
                      undefined=StrictUndefined,
                      extensions=['jinja2.ext.do', 'jinja2.ext.with_', 'jinja2.ext.loopcontrols'])
    return env


@pytest.fixture(scope='session', autouse=True)
def salt_context():
    """ Provide a set of common mocked keys.

    Currently this is only the 'raise' key for mocking out the raise() calls in the templates,
    and an empty 'salt' dict for adding salt-specific mocks.
    """
    def mocked_raise(err):
        raise RaiseException(err)

    context = {
        'raise': mocked_raise,
        'salt': {},
    }
    return context
init.sls
For purposes of the sections below, here's what the init.sls looks
like:
#!jinja|yaml
# {% set version = salt['pillar.get']('version', 'latest') %}
# version: {{ version }}
# {% if version == 'nope' %}
# {{ raise("OH NO YOU DIDN'T") }}
# {% endif %}
Mock out the Jinja Context
Let's test that rendering init.sls returns a version key with
some value.
Being able to mock out the salt pillar.get() function was a big
breakthrough with respect to being able to write any sort of unit tests
for the Salt states.
@pytest.fixture
def poc_context(self, salt_context):
    """ Provide a proof-of-concept context for mocking out salt[function](args) calls """
    poc_context = salt_context.copy()

    def mocked_pillar_get(key, default):
        """ Mocked salt['pillar.get'] function """
        pillar_data = {'version': '1234'}
        return pillar_data.get(key, default)

    # This is the super sauce:
    # We can mock out the ``salt['function'](args)`` calls in the salt states by
    # defining a 'salt' dict in the context, whose keys are the functions and whose values are the mocked functions
    poc_context['salt']['pillar.get'] = mocked_pillar_get
    return poc_context


def test_jinja_template_poc(self, jinja_env, poc_context):
    """ Render a template and check it has the expected content """
    # This assumes the tests are run from the root of the project.
    # The conftest.py file is setting the jinja_env to look for files under the 'latest' directory
    template = jinja_env.get_template('init.sls')

    # Return a string of the rendered template.
    result = template.render(poc_context)

    # Now we can run assertions on the returned rendered template.
    assert "version: 1234" in result
Mocking a raise() error
Now, let's see how we can test triggering the raise() error based on
the pillar data:
@pytest.fixture
def bad_context(self, salt_context):
    """ Let's see what happens if the template triggers a raise() """
    # The base salt_context from conftest.py includes a 'raise' entry that raises a RaiseException
    bad_context = salt_context.copy()
    bad_context['salt']['pillar.get'] = lambda k, d: 'nope'
    return bad_context


def test_raise_poc(self, jinja_env, bad_context):
    """ Try rendering a template that should fail with some raise() exception """
    with pytest.raises(RaiseException) as exc_info:
        template = jinja_env.get_template('init.sls')
        result = template.render(bad_context)

    raised_exception = exc_info.value
    assert str(raised_exception) == "OH NO YOU DIDN'T"
Mocking imported templates
Sometimes the Jinja templates may try to import other templates that are
either out of scope with respect to the current project, or the import
doesn't include the 'with context' modifier, so the Jinja context
isn't available when rendering the template.
In this case we can use the DictLoader portion of the jinja_env to
mock out importing the template.
In this example, let's assume the following template file exists in the
templates directory:
{%- import 'missing.tmpl' as missing -%}
Can we mock out missing/out of scope imports ?
Mocked: {{ missing.perhaps }}
Macro Call: {{ missing.lost('forever') }}
Now here is a test that can mock out the missing.tmpl contents,
including the lost() macro call:
def test_missing_template(self, jinja_env, mocked_templates, salt_context):
    """
    In this example, templates/missing-import.tmpl tries to import a non-available 'missing.tmpl' template.
    The ChoiceLoader checks the DictLoader, which checks mocked_templates and finds a match
    """
    mocked_templates['missing.tmpl'] = """
    {% set perhaps = "YES" %}
    {% macro lost(input) %}MOCKED_LOST{% endmacro %}
    """
    missing_template = jinja_env.get_template('templates/missing-import.tmpl')
    missing_result = missing_template.render(salt_context)
    assert "Mocked: YES" in missing_result
    assert "Macro Call: MOCKED_LOST" in missing_result
Testing out this macro is a little more involved, since first we have to
append a call to the macro to the template source and then render the
combined string. Note we're reusing the poc_context fixture defined earlier,
so the pillar.get() call is still mocked out to return 1234 for the version.
def test_get_pillar_from_macro(self, jinja_env, poc_context):
    """
    If we want to reference the mocked context in the macros, we need
    to render the source + macro call within a context.
    """
    # The '[0]' is because get_source returns a (source, filename, up-to-date) tuple.
    template_source = jinja_env.loader.get_source(jinja_env, 'macro.sls')[0]
    new_template = jinja_env.from_string(template_source + "{{ test_macro('hello') }}")
    result = new_template.render(poc_context)
    assert "macro sez hello" in result
    assert "version sez 1234" in result
It's also possible to check that the macro raises an error based on the
input:
def test_raise_from_macro(self, jinja_env, salt_context):
    """
    In this test, try forcing a raise() from within a macro
    """
    with pytest.raises(RaiseException) as exc_info:
        template_source = jinja_env.loader.get_source(jinja_env, 'macro.sls')[0]
        new_template = jinja_env.from_string(template_source + "{{ test_macro('nope') }}")
        result = new_template.render(salt_context)

    raised_exception = exc_info.value
    assert str(raised_exception) == "UNACCEPTABLE"
FECUNDITY: Checking for undefined variables during template rendering
Back in the day I learned that one of the virtues of a scientific theory
was 'fecundity', or the ability for the theory to predict new behavior
the original theory hadn't considered.
It looks like this may be called fruitfulness now, but still
whenever I stumble across something like this, I shout out 'FECUNDITY'
internally to myself. :shrug:
While I was working on this project, I noticed the jinja Environment
constructor has an
undefined
argument that defaulted to Undefined. I also noticed
StrictUndefined
was another value that the undefined argument could use.
It would be useful if the tests could throw exceptions when they ran
into undefined variables. This could happen from typos in the templates,
or possibly not mocking out all the globals variables used in a
template.
So I tried making a Jinja Environment with undefined=StrictUndefined,
wrote a test against a template with a typo in a variable name to see if
the test would raise an exception, and it did!
This is much more useful than the default Jinja behavior, where Jinja
gives the typo a value of None, which would likely surface in the
output as an empty string.
It's also more useful than setting undefined to
DebugUndefined,
which sometimes raised an exception, but sometimes left the unmodified
'{{ whatever }}' strings in the rendered templates. Bleh.
Here's the sample template I used, called unexpected_variable.sls.
It's the same as the original init.sls, but with a 'verion' typo:
#!jinja|yaml
# {% set version = salt['pillar.get']('role:echo-polycom:version', 'latest') %}
# version: {{ version }}
# {% if verion == 'nope' %}
# {{ raise("OH NO YOU DIDN'T") }}
# {% endif %}
And let's try adding this test, which is the same as the earlier
test_jinja_template_poc() test, but with the buggy template:
def test_unexpected_variable(self, jinja_env, poc_context):
    """ Render a template and check it has the expected content """
    # This assumes the tests are run from the root of the project.
    # The conftest.py file is setting the jinja_env to look for files under the 'latest' directory
    template = jinja_env.get_template('unexpected_variable.sls')

    # Return a string of the rendered template.
    result = template.render(poc_context)

    # Now we can run assertions on the returned rendered template.
    assert "version: 1234" in result
This test will fail with the undefined variable exception below! Cool. I
can fix the typo and rerun the test to get it passing again! FECUNDITY!
==================================================== FAILURES =======================================================
_________________________________________ TestJinja.test_unexpected_variable __________________________________________
Traceback (most recent call last):
  File "/my/working/dir/test_jinja_template_poc.py", line 150, in test_unexpected_variable
    result = template.render(poc_context)
  File "/usr/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
    return original_render(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
    return self.environment.handle_exception(exc_info, True)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
    raise value.with_traceback(tb)
  File "unexpected_variable.sls", line 6, in top-level template code
    # {% if verion == 'nope' %}
jinja2.exceptions.UndefinedError: 'verion' is undefined
========================================= 1 failed, 5 passed in 0.89 seconds ==========================================
Running the tests
The tests are kicked off via 'pytest' like any other python project
using pytest.
Every now and then someone will come along and spraypaint yellow
"TS"'s on the sidewalks around the neighborhood, with arrows next to
them. The arrows lead to little square metal covers with a hole in
the middle.
From 99%
Invisible,
I figured it had something to do with the gas line since it was yellow,
and that there was probably some sort of access point under the covers,
but I couldn't figure out why they were using 'TS' instead of something
like 'GL' or 'GAS'.
Recently I found one of the TS's pointing to a more informative cover:
I'd like to add some unit tests and a Containerfile to an existing repo
of salt states, where running 'pytest' in the repo's workspace would
spin up the container and run the tests against it, and then tear down
the container.
The tests would run 'salt state.apply' commands against the container,
applying different sets of pillar data depending on the test.
Project Directory Structure
First let's set up a directory structure for the project that includes
the states, their tests, and any needed test data. In the case of salt
states, the test data will be pillar files and files served by
ext_pillar. The directory structure below is what I ended up using:
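Roughly, it looks like this (the layout is described piece by piece below; the states directory shows up as test_project in the test fixture later on):

```
.
├── .git/
├── env/                  # virtualenv with testinfra installed
├── Containerfile
├── setup.cfg
├── test_project/         # the salt states under test (*.sls, *.jinja, templates/, files/, ...)
└── tests/
    ├── test_project.py
    └── data/
        ├── pillar/
        └── ext_pillar/
```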
Assuming all these files are stored in git, there's a .git directory
from when you cloned the repo.
The 'env' directory is a python virtualenv where the
testinfra egg has been installed. You can skip the virtualenv if you're
pulling in testinfra from a global package.
Containerfile is, well, a Podman Containerfile, and setup.cfg contains
some pytest-specific settings.
The tests directory is where the testinfra test_*.py files are stored.
The tests/data/pillar directory will end up being mapped to the /srv/pillar
directory in the test container. Similarly, tests/data/ext_pillar will
be mapped to /srv/ext_pillar.
The salt-states directory includes the *.sls and *.jinja files, and
any other salt-related subdirectories like 'templates', 'files',
'macros', etc. This directory will be mapped to /srv/salt/project in
the container.
Containerfile
The Containerfile I'm using for this project is below.
# I'm using Ubuntu 20.04 for this project-under-test so pull in the stock Ubuntu image for that version
FROM ubuntu:focal

RUN apt-get update

# The stock image doesn't include curl, so install it and bootstrap salt
# Longterm, I would host the bootstrapping script internally in case that site disappeared.
RUN apt-get install -y curl
RUN curl -L https://bootstrap.saltproject.io | sh -s --

# Configure salt to run as a masterless minion
RUN echo "file_client: local" > /etc/salt/minion.d/masterless.conf
RUN printf "local" > /etc/salt/minion_id

# Set up the /srv/salt environment
RUN mkdir -p /srv/salt
RUN mkdir -p /srv/ext_pillar/hosts/local/files
RUN printf "ext_pillar:\n  - file_tree:\n      root_dir: /srv/ext_pillar\n" >> /etc/salt/minion.d/masterless.conf

# Delay setting up /srv/salt/top.sls until the container starts, so PROJECT can be sent in as an ENV
RUN printf "printf \"base:\\n  '*':\\n    - \${PROJECT}\\n\" > /srv/salt/top.sls" >> /root/.bashrc

# Create a local user
RUN useradd local_user

# The Salt git states apparently assume git is already installed on the host, so install it.
RUN apt-get install -y git
Building and verifying the saltmasterless:latest image
Using this Containerfile, I built a saltmasterless:latest image:
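The build itself is the standard invocation, something like:

```shell
podman build -t saltmasterless:latest .
```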
Then with this image, I can start a container that includes volumes
mapping tests/data/pillar to /srv/pillar, tests/data/ext_pillar to
/srv/ext_pillar, and test_project to /srv/salt/test_project:
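For reference, this mirrors the podman command the test fixture runs later (the --name and --hostname values are just what I use there):

```shell
podman run -d -it --env PROJECT=test_project \
  -v $PWD/test_project:/srv/salt/test_project \
  -v $PWD/tests/data/pillar:/srv/pillar \
  -v $PWD/tests/data/ext_pillar:/srv/ext_pillar/hosts/local/files \
  --name test_box --hostname local \
  saltmasterless:latest bash
```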
The setup.cfg file is mostly used to tell pytest to ignore the salt
states directory:
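A minimal sketch of what I mean, assuming norecursedirs is the mechanism (directory names taken from the layout above):

```ini
[tool:pytest]
# Keep pytest from wandering into the salt states or the virtualenv
norecursedirs = test_project env
```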
tests/data/pillar/top.sls
As mentioned above the tests/data/pillar directory will be mapped to
/srv/pillar in the container, but let's look at the top.sls a little
closer. From the Containerfile, /etc/salt/minion_id was set to
'local', so normally the top.sls file will end up using
/srv/pillar/test_zero.sls for its pillar data.
But let's say we want to run a test with some other pillar data. In that
case, in the test we'll use the salt-call '--id' argument to run the
command as a different minion id. So with the top.sls file below,
running 'salt-call --local --id=test_one state.apply' will use the
test_one.sls pillar data instead of test_zero.sls.
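A sketch of that pillar top.sls (the minion-id globs are assumptions based on the tests described in this post):

```yaml
base:
  'local':
    - test_zero
  'test_one':
    - test_one
  'missing_mandatory_pillar_item':
    - missing_mandatory_pillar_item
```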
The tests/test_project.py file includes a host fixture based on
https://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images.
Note that the podman_cmd is pretty much the same as the command used
above when testing the container. The cwd-related logic is because the
-v args required full path names.
import os
import subprocess

import pytest
import testinfra


# scope='session' uses the same container for all the tests;
# scope='function' uses a new container per test function.
@pytest.fixture(scope='session')
def host(request):
    cwd = os.getcwd()
    podman_cmd = "podman run -d -it --env PROJECT=test_project -v ${PWD}/test_project:/srv/salt/test_project -v ${PWD}/tests/data/pillar:/srv/pillar -v ${PWD}/tests/data/ext_pillar:/srv/ext_pillar/hosts/local/files --name test_box --hostname local saltmasterless:latest bash"
    podman_cmd = podman_cmd.replace("${PWD}", cwd)
    podman_cmd_list = podman_cmd.split(' ')

    # run a container
    podman_id = subprocess.check_output(podman_cmd_list).decode().strip()

    # return a testinfra connection to the container
    yield testinfra.get_host("podman://" + podman_id)

    # at the end of the test suite, destroy the container
    subprocess.check_call(['podman', 'rm', '-f', podman_id])
tests/test_project.py full salt run test
Here's a test that does a full salt state.apply on the container. This
test is slow, since the container starts with just salt and git
installed, and the project-under-test is making a lot of changes. Note
the use of the '--local' argument to tell salt not to try to pull data
from a salt master.
def test_full_salt_run(host):
    print('running salt-call state.apply. This will take a few minutes')
    cmd_output = host.run('salt-call --state-output=terse --local state.apply')
    print('cmd.stdout: ' + cmd_output.stdout)
    assert cmd_output.rc == 0
    assert cmd_output.stderr == ''
tests/test_project.py alternative pillar data test
In this example, suppose ./test_project/map.jinja included a check like
below:
{% if not salt['pillar.get']('mandatory_pillar_item') %}
{{ raise('mandatory_pillar_item is mandatory') }}
{% endif %}
And then there's a 'missing_mandatory_pillar_item' entry in
./tests/data/pillar/top.sls as per above, and a
./tests/data/pillar/missing_mandatory_pillar_item.sls file exists
that's missing the mandatory pillar item.
Then a test like below could force a salt run that uses this pillar data
by using the '--id' argument as per below, and an assertion could
check the error was raised.
def test_missing_mandatory_pillar_item(host):
    print('running another salt-call state.apply with bad pillar data.')
    cmd_output = host.run('salt-call --state-output=terse --local --id=missing_mandatory_pillar_item state.apply')
    assert "mandatory_pillar_item is mandatory" in cmd_output.stderr
    assert cmd_output.rc != 0
Running the tests
The tests are kicked off via 'pytest' like any other python project
using pytest.
Recently, I ran some tests to see what would happen when my root CA cert
expired, and what I'd need to do to update the cert.
Spoiler alert: Updating the CA cert was not that hard...
First I created a CA that expired in 2 hours using the new-ca.py code
below:
from OpenSSL import crypto
# Following script will create a self signed root ca cert.
from OpenSSL import crypto, SSL
from os.path import join
import random

CN = 'expired-ca-test'
pubkey = "%s.crt" % CN   # replace %s with CN
privkey = "%s.key" % CN  # replace %s with CN
pubkey = join(".", pubkey)
privkey = join(".", privkey)

k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, 2048)

# create a self-signed cert
cert = crypto.X509()
cert.get_subject().CN = CN
cert.set_serial_number(0)
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(7200)  # CA is only good for 2 hours
cert.set_issuer(cert.get_subject())
cert.set_subject(cert.get_subject())
cert.set_version(2)
xt = crypto.X509Extension(b'basicConstraints', 1, b'CA:TRUE')
cert.add_extensions((xt,))
cert.set_pubkey(k)
cert.sign(k, 'sha512')

pub = crypto.dump_certificate(crypto.FILETYPE_PEM, cert)
priv = crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
open(pubkey, "wt").write(pub.decode("utf-8"))
open(privkey, "wt").write(priv.decode("utf-8"))
This block is based on how the ancient certmaster program created its
CA.
Then I created an expired-ca-test.srl file with contents "01".
THIS FAILED. I thought previously signed certs would continue to verify
against the expired CA, and that only new certs couldn't be created. Instead,
previously signed certs won't validate against the expired CA.
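The failing check was the same kind of verify shown further down, just against the expired CA cert (cert names as used in this post):

```shell
openssl verify -verbose -CAfile expired-ca-test.crt pre-expired-example.crt
```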
Then I tried signing a new cert with the expired CA. Surely this would
fail, right?
Now let's see what happens if we update the CA cert with the update-ca.py
script below.
This is almost the same as the new-ca.py script above, except the
original CA key is reused instead of generating a new key. Also the
CN and serial number need to be the same as the original expired CA
cert.
Verification will fail if the CN or serial number values are not the
same as the original CA, but unfortunately I didn't save the errors
from when I tried using 'updated-ca-test' as the CN, or when I tried
bumping up the serial number to 1.
from OpenSSL import crypto
# Following script will create a self signed root ca cert.
from OpenSSL import crypto, SSL
from os.path import join
import random

CN = 'updated-ca-test'
pubkey = "%s.crt" % CN   # replace %s with CN
privkey = "%s.key" % CN  # replace %s with CN
pubkey = join(".", pubkey)
privkey = join(".", privkey)

# Instead of creating a new key, use the old CA's key
# nope: k = crypto.PKey()
# nope: k.generate_key(crypto.TYPE_RSA, 2048)
st_key = open('expired-ca-test.key', 'rt').read()
k = crypto.load_privatekey(crypto.FILETYPE_PEM, st_key)

# create a self-signed cert
cert = crypto.X509()
cert.get_subject().CN = 'expired-ca-test'  # keep the same CN as the old CA cert
cert.set_serial_number(0)                  # keep the same serial number as the old CA cert
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(63072000)         # CA is only good for 2 years
cert.set_issuer(cert.get_subject())
cert.set_subject(cert.get_subject())
cert.set_version(2)
xt = crypto.X509Extension(b'basicConstraints', 1, b'CA:TRUE')
cert.add_extensions((xt,))
cert.set_pubkey(k)
cert.sign(k, 'sha512')

pub = crypto.dump_certificate(crypto.FILETYPE_PEM, cert)
priv = crypto.dump_privatekey(crypto.FILETYPE_PEM, k)
open(pubkey, "wt").write(pub.decode("utf-8"))
open(privkey, "wt").write(priv.decode("utf-8"))
Note that this code creates an updated-ca-test.key that's the same as
expired-ca-test.key, so I could have continued using expired-ca-test.key
in the cert creation below.
Now verify the old cert verifies using the new CA:
> openssl verify -verbose -CAfile updated-ca-test.crt pre-expired-example.crt
pre-expired-example.crt: OK
THIS WORKED. The updated CA could be used to verify both new and
previously created certs. Hurray!
Conclusion
An expired/expiring root CA may be a hassle, but it's not catastrophic.
The biggest pain should be pushing out the updated root CA everywhere the
cert is being used in your environment. If you're using an
orchestration/CM tool like Salt or Ansible, updating the root CA cert
shouldn't be too bad, but remember to reload or restart any services
using the cert to force them to pick up the updated CA cert.
I found a workaround for the Duplicity fails to
start
issue where 'sudo deja-dup' would fail with the python stacktrace
mentioned in the launchpad ticket.
The ticket was not very useful, so I started looking at the various
files in the stacktrace and saw that the line from
/usr/lib/python3/dist-packages/duplicity/backends/giobackend.py was
within an "if u'DBUS_SESSION_BUS_ADDRESS' not in os.environ" block.
So I wondered what would happen if I let that environment variable pass
into the sudo environment. I tried 'sudo -E deja-dup' to preserve
the whole environment. This didn't
result in a stacktrace, but it ended up running the backup as the normal
non-root user, probably because the preserved environment included the
USER and HOME variables along with the DBUS_SESSION_BUS_ADDRESS
variable.
Then I tried preserving just DBUS_SESSION_BUS_ADDRESS with
'sudo --preserve-env=DBUS_SESSION_BUS_ADDRESS deja-dup', and it worked as
expected.
So the hint here is that when presented with a stacktrace don't be
afraid to "Use the Source, Jean Luc".
This post covers how I'm keeping my orgmode
notebooks synced between my desktop and phone. There are also some tips
on how I resolve merge conflicts in my notebooks and other hints for
using Orgzly.
Setup
Owncloud Server
Set up an owncloud server somewhere where both
my desktop and phone can reach it.
Desktop: Owncloud client and git repo
On the desktop, install the owncloud client and configure ~/owncloud
to be shared with the server. Also set up an ~/owncloud/org directory
and add an ~/org symlink that points to ~/owncloud/org. This is mostly
to make it easier to access the org files.
Update your emacs and/or vi configs to support orgmode. I'll update the
org files in the editor that's more convenient at the time, though
I'll use emacs if I need to update the deadline of a task.
I've also added .git to the owncloud and did a 'git init' in the
~/owncloud/org directory. This repo will be used in the 'Handling
Conflicts' section below.
Create or move your notebook files into the ~/owncloud/org directory
and make sure they have a '.org' extension. Verify that the org files
are being synced to the owncloud server.
Make sure your owncloud client is set to start when your desktop
reboots.
Phone: Owncloud client and Orgzly
Install the owncloud client on your phone.
Configure it to sync with the owncloud server and verify it's syncing
the notebook files from the server.
Also install and configure Orgzly. For
syncing the repositories, use WebDAV with the URL
https://<your-owncloud-server>/remote.php/webdav/org. I originally
tried using local storage, but I couldn't get that to work.
Then for each of the notebooks, set the link to
https://<your-owncloud-server>/remote.php/webdav/org. Update one of
your notebooks with Orgzly and verify the change is synced to your
desktop.
Day-to-day Usage
Throughout the day, I'm usually using Orgzly, checking the Agenda tab
for overdue and upcoming tasks, and adding new tasks and notes.
I access the notebooks on the desktop less frequently, mostly for
archiving completed tasks and adding links and notes to research tasks.
I also tend to move tasks between notebooks from the desktop.
Handling Recurring Events
I ignore the scheduled time setting in Orgzly, and only set the
deadlines with warning times. Then I treat the notification that's
generated when the warning time passes as my cue to start on a
task.
For repeating events, I use the '++' modifier for tasks. This way if I
miss a few iterations of a task it will add the next iteration in the
future.
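For example, a hypothetical weekly task with a two-day warning period looks like this in the org file (the '++1w' repeater is what keeps the next occurrence in the future even if I skip a few):

```org
* TODO Water the plants
  DEADLINE: <2024-06-03 Mon ++1w -2d>
```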
I also try to set an appropriate warning period when setting tasks.
It took me a while to figure out I could tap on the date itself to bring
up a calendar instead of being limited to the 'today', 'tomorrow',
and 'next week' options. <shrug>.
Handling Conflicts
Sometimes when I'm updating the notebooks on both my desktop and phone,
Orgzly will say there's a conflict.
When this happens I go to my desktop and make a checkpoint commit of any
outstanding changes in the ~/owncloud/org repo. Then I push the
notebook from Orgzly (the cloud icon with the 'up' arrow).
Then on the desktop, I do a 'git diff' and adjust the notebook as needed.
Usually this includes adding some new notes or adjusting some deadlines.
I use GnuCash, but keeping my 401(k) accounts up to date has always been
a tedious, manual process.
I came up with the python snippet below to handle the import, but it was
a pain to write since I couldn't find any examples for setting up stock
or mutual fund transactions.
The
SetSharePriceAndAmount
method ended up being key, and I only found that by searching through
the gnucash source.
The GncNumeric class also ended up being more of a pain to use than I
expected. There's probably a better way to use it, but the 'multiply the
value by 1,000,000 and use 1,000,000 as the denominator' approach is
working for me now.
I'm using the stock GnuCash and python-gnucash version 2.6.19 available
in Ubuntu 18.04, so this is stuck using Python 2.7.
#!/usr/bin/python2.7
import csv
from datetime import datetime

import gnucash

session = gnucash.Session("xml://yourfile.gnucash")
book = session.book
root_account = book.get_root_account()
usd = book.get_table().lookup('ISO4217', 'USD')

# There's probably a better way to use 'Your:Retirement:Contributions' instead ....
contrib_acct = root_account.lookup_by_name("Your").lookup_by_name("Retirement").lookup_by_name("Contributions")
parent_acct = root_account.lookup_by_name("401k")

with open('your_transactions.csv', 'rb') as trans_csv:
    trans_reader = csv.reader(trans_csv, delimiter=',')

    # Skip over the first row since it's headers
    header = next(trans_reader)

    for description, date, fund_name, share_price_str, share_amount_str, amount_str in trans_reader:
        child_account = parent_acct.lookup_by_name(fund_name)
        posting_date = datetime.strptime(date, "%m/%d/%y")

        tx = gnucash.Transaction(book)
        tx.BeginEdit()
        tx.SetCurrency(usd)
        tx.SetDatePostedTS(posting_date)
        tx.SetDescription(description)

        sp1 = gnucash.Split(book)
        sp1.SetParent(tx)
        sp1.SetAccount(child_account)

        # GncNumeric(n,d) represents numbers as fractions of the form n/d, so GncNumeric(1234567, 1000000) = 1.234567
        # There's probably a better way to do this...
        share_price = gnucash.GncNumeric(float(share_price_str) * (10 ** 6), 10 ** 6)
        share_amount = gnucash.GncNumeric(float(share_amount_str) * (10 ** 6), 10 ** 6)
        # share_price * share_amount == amount, so I could have used that instead of using the value from the csv
        amount = gnucash.GncNumeric(float(amount_str) * (10 ** 6), 10 ** 6)

        # ( ˘▽˘)っ♨ This is the secret sauce for setting the number of shares and the price.
        sp1.SetSharePriceAndAmount(share_price, share_amount)

        sp2 = gnucash.Split(book)
        sp2.SetParent(tx)
        sp2.SetAccount(contrib_acct)
        sp2.SetValue(amount.neg())

        tx.CommitEdit()

session.save()
session.end()
Special thanks to this
post
for providing most of the code above.
I'm at the point with an Elixir/Phoenix side project where I'm thinking
about deployment.
The first big stumbling block was that my development environment (Ubuntu
16.04) wasn't the same as my deployment environment (Centos 7). For
Ubuntu, I could pull the latest Elixir packages from Erlang
Solutions,
but they don't host Centos Elixir packages, and the version of Elixir
on EPEL is over 4 years
old - old
enough that 'mix deps' on my project was erroring out.
I found the posts below about installing Elixir on Centos 7, but they
involve cloning the Elixir repo and building it from source, and I
don't want development tools like git and gcc on my production
machines.
Maybe one of those links works for you, but what I
want is a way to build a
more recent version of the Elixir rpm than what was available in EPEL.
That way I can recompile and package Elixir on a development machine,
and then only copy the rpm up to my production machine.
It looks like Elixir in EPEL is waiting for the Erlang 18 release
to get promoted in EPEL, so maybe I can take the existing Elixir
packaging source and build it against the latest Erlang packages from
Erlang Solutions.
I found the packaging source at
https://src.fedoraproject.org/rpms/elixir and after poking around a
bit I came up with the Vagrantfile below. It seems to be working OK so far.