<?xml version='1.0' encoding='UTF-8'?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
  <channel>
    <title>nerdsniped.se</title>
    <link>https://nerdsniped.se</link>
    <description>Personal website of Johan Saf</description>
    <category>Weblog</category>
    <copyright>2024– Johan Saf</copyright>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <generator>Python and neovim</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 15 Nov 2025 12:12:31 +0000</lastBuildDate>
    <pubDate>Tue, 03 Jun 2025 12:27:09 +0200</pubDate>
    <ttl>1440</ttl>
    <dc:creator>Johan Saf</dc:creator>
    <item>
      <title>Let's Encrypt Wildcard Certs and deSEC</title>
      <link>https://nerdsniped.se/posts/lets-encrypt-wildcard-certs-with-desec/</link>
      <description><![CDATA[<p>I use <a href="https://desec.io">deSEC</a> for DNS hosting (which I found via <a href="https://jpmens.net/2025/03/04/a-look-at-domain-hosting-with-desec/">JP Mens</a>), and it's possible to use it for generating wildcard TLS certs via Let's Encrypt.</p>
<p>This is being done on FreeBSD 14.2-RELEASE-p3.</p>
<p>First I had to install <code>certbot</code>, <code>pip</code> and the appropriate Python module:</p>
<pre class="command-line" data-user="root" data-host="server"><code class="language-bash">pkg install -y py311-certbot py311-pip
pip install certbot-dns-desec</code></pre><p><code>pip</code> will warn against installing system-wide packages and I would generally agree, but this time I will go against the warning. At some point I will look into making my own custom port for the deSEC module, but today is not that day.</p>
<p>The next step is to log in to deSEC and create a new token; no extra permissions are required. Create a place for storing the token:</p>
<pre class="command-line" data-user="root" data-host="server"><code class="language-bash">mkdir /usr/local/etc/letsencrypt/secrets
chmod 700 /usr/local/etc/letsencrypt/secrets</code></pre><p>Save the token in <code>/usr/local/etc/letsencrypt/secrets/domain.tld.ini</code>:</p>
<pre><code class="language-ini">dns_desec_token = &lt;token&gt;
</code></pre>
<p>And fix permissions:</p>
<pre class="command-line" data-user="root" data-host="server"><code class="language-bash">chmod 600 /usr/local/etc/letsencrypt/secrets/domain.tld.ini</code></pre><p>Now try and request a certificate:</p>
<pre class="command-line" data-user="root" data-host="server"><code class="language-bash">certbot certonly --authenticator dns-desec --dns-desec-credentials /usr/local/etc/letsencrypt/secrets/domain.tld.ini -d "domain.tld" -d "*.domain.tld"</code></pre><p>If you check in the deSEC interface at the same time you'll see new <code>_acme-challenge</code> records have been published.</p>
<p>The first time I did this I got an error about the new records not existing, but after retrying the command the certificate was created. I don't have a definite explanation, but it seemed like the Let's Encrypt servers couldn't see the records the first time around, presumably a DNS propagation delay.</p>
<p>Finally, to renew the certificate automatically, this can be added to <code>/etc/periodic.conf</code>:</p>
<pre><code class="language-ini">weekly_certbot_enable=&quot;YES&quot;
weekly_certbot_post_hook=&quot;service nginx restart&quot;     # if you&#x27;re running nginx
# or
weekly_certbot_post_hook=&quot;service apache24 restart&quot;  # if you&#x27;re running apache
</code></pre>
<p>See <code>/usr/local/etc/periodic/weekly/500.certbot-3.11</code> for some more configuration options.</p>
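<p>Before waiting for the weekly job it's worth confirming that renewal will work end-to-end. <code>certbot</code> has a dry-run mode for this; it renews against the Let's Encrypt staging environment without touching the real certificate:</p>
<pre class="command-line" data-user="root" data-host="server"><code class="language-bash">certbot renew --dry-run</code></pre>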
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">ee189bf3e0790d46aece53855f691098</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Dynamic Ansible Inventory with Netbox</title>
      <link>https://nerdsniped.se/posts/dynamic-ansible-inventory-with-netbox/</link>
<description><![CDATA[<p>I've recently started using the <a href="https://docs.ansible.com/ansible/latest/collections/netbox/netbox/nb_inventory_inventory.html">netbox.netbox.nb_inventory</a> plugin for dynamically creating Ansible inventories.</p>
<p>This will, for instance, fetch all active devices that have the tag <code>network_edge</code>:</p>
<pre><code class="language-yaml">plugin: netbox.netbox.nb_inventory
api_endpoint: https://netbox.example.net
token: token_from_netbox

query_filters:
  - status: active
  - tag: network_edge
</code></pre>
<p>Let's check what data we get back (very much truncated):</p>
<pre class="command-line" data-user="user" data-host="host"><code class="language-bash">$ ansible-inventory -v --list -i inventory/netbox.yml</code></pre><pre><code class="language-json">{
    &quot;_meta&quot;: {
        &quot;hostvars&quot;: {
            &quot;router1&quot;: {
                &quot;ansible_host&quot;: &quot;2001:db8::1&quot;,
                &quot;platforms&quot;: [
                    &quot;cisco-ios-xr&quot;
                ],
                &quot;primary_ip4&quot;: &quot;192.0.2.1&quot;,
                &quot;primary_ip6&quot;: &quot;2001:db8::1&quot;,
                &quot;status&quot;: {
                    &quot;label&quot;: &quot;Active&quot;,
                    &quot;value&quot;: &quot;active&quot;
                },
                &quot;tags&quot;: [
                    &quot;network_edge&quot;
                ]
            },
            &quot;router2&quot;: {
                &quot;ansible_host&quot;: &quot;2001:db8::2&quot;,
                &quot;platforms&quot;: [
                    &quot;arista-eos&quot;
                ],
                &quot;primary_ip4&quot;: &quot;192.0.2.2&quot;,
                &quot;primary_ip6&quot;: &quot;2001:db8::2&quot;,
                &quot;status&quot;: {
                    &quot;label&quot;: &quot;Active&quot;,
                    &quot;value&quot;: &quot;active&quot;
                },
                &quot;tags&quot;: [
                    &quot;network_edge&quot;
                ]
            }
        }
    },
    &quot;all&quot;: {
        &quot;children&quot;: [
            &quot;ungrouped&quot;
        ]
    },
    &quot;ungrouped&quot;: {
        &quot;hosts&quot;: [
            &quot;router1&quot;,
            &quot;router2&quot;
        ]
    }
}
</code></pre>
<p>Unfortunately we can't pass this inventory to Ansible directly, since Ansible needs the <code>ansible_network_os</code> and <code>ansible_connection</code> variables to know how to talk to network devices.</p>
<p>The method I ended up with takes the platform from the output above, groups the devices together and then uses <code>group_vars</code> to set the connection settings.</p>
<h2>Grouping the devices</h2>
<p>We can use the <code>keyed_groups</code> option in the plugin to group devices together based on some attribute (in this case the platform). Something like this:</p>
<pre><code class="language-yaml">keyed_groups:
  - key: platform.slug
    prefix: platform
</code></pre>
<p>This would give the following output:</p>
<pre><code class="language-json">{
    &quot;_meta&quot;: {
        &quot;hostvars&quot;: {
            &quot;router1&quot;: {
                &quot;ansible_host&quot;: &quot;2001:db8::1&quot;,
                &quot;platforms&quot;: [
                    &quot;cisco-ios-xr&quot;
                ],
                &quot;primary_ip4&quot;: &quot;192.0.2.1&quot;,
                &quot;primary_ip6&quot;: &quot;2001:db8::1&quot;,
                &quot;status&quot;: {
                    &quot;label&quot;: &quot;Active&quot;,
                    &quot;value&quot;: &quot;active&quot;
                },
                &quot;tags&quot;: [
                    &quot;network_edge&quot;
                ]
            },
            &quot;router2&quot;: {
                &quot;ansible_host&quot;: &quot;2001:db8::2&quot;,
                &quot;platforms&quot;: [
                    &quot;arista-eos&quot;
                ],
                &quot;primary_ip4&quot;: &quot;192.0.2.2&quot;,
                &quot;primary_ip6&quot;: &quot;2001:db8::2&quot;,
                &quot;status&quot;: {
                    &quot;label&quot;: &quot;Active&quot;,
                    &quot;value&quot;: &quot;active&quot;
                },
                &quot;tags&quot;: [
                    &quot;network_edge&quot;
                ]
            }
        }
    },
    &quot;all&quot;: {
        &quot;children&quot;: [
            &quot;ungrouped&quot;,
            &quot;platform_arista_eos&quot;,
            &quot;platform_cisco_ios_xr&quot;
        ]
    },
    &quot;platform_arista_eos&quot;: {
        &quot;hosts&quot;: [
            &quot;router2&quot;
        ]
    },
    &quot;platform_cisco_ios_xr&quot;: {
        &quot;hosts&quot;: [
            &quot;router1&quot;
        ]
    }
}
</code></pre>
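<p>The group names come from how <code>keyed_groups</code> constructs them: the prefix, a separator (default <code>_</code>) and the key's value, with characters that aren't valid in group names replaced by underscores. Roughly sketched in Python (an illustration, not the plugin's actual code):</p>
<pre><code class="language-python">import re

def keyed_group(prefix, value, separator=&quot;_&quot;):
    # &quot;platform&quot; + &quot;cisco-ios-xr&quot; becomes &quot;platform_cisco_ios_xr&quot;
    return prefix + separator + re.sub(r&quot;[^A-Za-z0-9_]&quot;, &quot;_&quot;, str(value))
</code></pre>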
<h2>Ansible group variables</h2>
<p>There's already documentation available in the <a href="https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-many-machines-group-variables">Ansible documentation</a> about group variables, what they are and how they work.</p>
<p>But for our purpose we can use the groups <code>platform_arista_eos</code> and <code>platform_cisco_ios_xr</code> coming from the inventory to set connection settings, per platform.</p>
<p>Let's create the directories <code>inventory/group_vars/platform_arista_eos</code> and <code>inventory/group_vars/platform_cisco_ios_xr</code>, then create the file <code>settings.yml</code> inside both of them. I decided to put the following content into the files:</p>
<p>Arista:</p>
<pre><code class="language-ini">ansible_network_os: arista.eos.eos
ansible_connection: ansible.netcommon.httpapi
proxy_env:
  http_proxy: http://proxy.example.com:8080
</code></pre>
<p>IOS-XR:</p>
<pre><code class="language-ini">ansible_network_os: cisco.iosxr.iosxr
ansible_connection: ansible.netcommon.network_cli
ansible_iosxr_commit_comment: Committed by Ansible
</code></pre>
<p>Settings for the supported platforms can be found <a href="https://docs.ansible.com/ansible/latest/network/user_guide/platform_index.html">here</a>.</p>
<h2>Final execution</h2>
<p>Let's run the <code>ansible-inventory</code> command one last time and check the output:</p>
<pre><code class="language-json">{
    &quot;_meta&quot;: {
        &quot;hostvars&quot;: {
            &quot;router1&quot;: {
                &quot;ansible_connection&quot;: &quot;ansible.netcommon.network_cli&quot;,
                &quot;ansible_host&quot;: &quot;2001:db8::1&quot;,
                &quot;ansible_iosxr_commit_comment&quot;: &quot;Committed by Ansible&quot;,
                &quot;ansible_network_os&quot;: &quot;cisco.iosxr.iosxr&quot;,
                &quot;platforms&quot;: [
                    &quot;cisco-ios-xr&quot;
                ],
                &quot;primary_ip4&quot;: &quot;192.0.2.1&quot;,
                &quot;primary_ip6&quot;: &quot;2001:db8::1&quot;,
                &quot;status&quot;: {
                    &quot;label&quot;: &quot;Active&quot;,
                    &quot;value&quot;: &quot;active&quot;
                },
                &quot;tags&quot;: [
                    &quot;network_edge&quot;
                ]
            },
            &quot;router2&quot;: {
                &quot;ansible_connection&quot;: &quot;ansible.netcommon.httpapi&quot;,
                &quot;ansible_host&quot;: &quot;2001:db8::2&quot;,
                &quot;ansible_network_os&quot;: &quot;arista.eos.eos&quot;,
                &quot;platforms&quot;: [
                    &quot;arista-eos&quot;
                ],
                &quot;primary_ip4&quot;: &quot;192.0.2.2&quot;,
                &quot;primary_ip6&quot;: &quot;2001:db8::2&quot;,
                &quot;proxy_env&quot;: {
                    &quot;http_proxy&quot;: &quot;http://proxy.example.com:8080&quot;
                },
                &quot;status&quot;: {
                    &quot;label&quot;: &quot;Active&quot;,
                    &quot;value&quot;: &quot;active&quot;
                },
                &quot;tags&quot;: [
                    &quot;network_edge&quot;
                ]
            }
        }
    }
}
</code></pre>
<p>Now the correct connection variables will be set for the devices, and the inventory can be passed to, for instance, <code>ansible-playbook</code> using the <code>-i</code> flag.</p>
<p>New platforms can be added easily by creating a new directory and a settings file.</p>
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">fdbbc86fb830784925a5ff048c09a119</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Custom AWX Execution Environment</title>
      <link>https://nerdsniped.se/posts/custom-awx-execution-environment/</link>
<description><![CDATA[<p>From what I can tell an <a href="https://docs.ansible.com/automation-controller/latest/html/userguide/execution_environments.html">Execution Environment</a> in the AWX context is a container that gets run as a Kubernetes pod whenever a playbook executes.</p>
<p>AWX can use several of these environment containers, so depending on your needs, custom containers can be created that contain different Python packages, Ansible collections and so on.</p>
<p>I wanted to create a custom container to ensure a newer version of Ansible was available, that <code>ansible-pylibssh</code> was installed, and that the collections I use were already in place when a playbook started.</p>
<p>To do this some files are required, and the <a href="https://ansible-builder.readthedocs.io/">ansible-builder</a> tool will be used to create the container.</p>
<h2>The files</h2>
<h3>execution-environment.yml</h3>
<p><a href="https://ansible.readthedocs.io/projects/builder/en/latest/definition/">This file</a> defines the environment:</p>
<pre><code class="language-yaml">---
version: 3
images:
  base_image:
    name: quay.io/centos/centos:stream10
dependencies:
  ansible_core:
    package_pip: ansible-core
  ansible_runner:
    package_pip: ansible-runner
  galaxy: requirements.yml
  system: bindep.txt
  python: requirements.txt
additional_build_steps:
  append_base:
    - RUN $PYCMD -m pip install -U pip
  append_final:
    - RUN git lfs install --system
</code></pre>
<p>This will install the latest available version of <code>ansible-core</code> and <code>ansible-runner</code>, look for other requirements in the files specified and execute some commands.</p>
<h3>bindep.txt</h3>
<p>Contains the software that should be installed in the environment, from the OS package manager.</p>
<pre><code class="language-plain">git-core [platform:rpm]
git-lfs [platform:rpm]
epel-release [platform:rpm]
</code></pre>
<h3>requirements.yml</h3>
<p>A list of Ansible collections to be installed from Galaxy.</p>
<pre><code class="language-yaml">---
collections:
  - name: ansible.netcommon
  - name: ansible.posix
  - name: ansible.utils
  - name: awx.awx
  - name: cisco.iosxr
  - name: containers.podman
  - name: netbox.netbox
</code></pre>
<h3>requirements.txt</h3>
<p>Python modules to be installed via <code>pip</code>.</p>
<pre><code class="language-plain">ansible-pylibssh
</code></pre>
<h2>Create the environment</h2>
<p>Once the files are created <code>ansible-builder</code> can be used to create the container:</p>
<pre class="command-line" data-user="user" data-host="host"><code class="language-bash">ansible-builder build -v3 -t custom-environment</code></pre><p>Depending on the number of dependencies this will take a while but once it's done you should have a container which can be tagged and used. I push it to a local container registry:</p>
<pre class="command-line" data-user="user" data-host="host"><code class="language-bash">docker tag custom-environment ee:latest
docker push ee:latest</code></pre><h2>Using the environment</h2>
<p>If your container registry requires credentials you have to create a &quot;Container Registry&quot; credential in AWX.</p>
<p>To add the environment to AWX, log in as an admin user and go to Administration, Execution Environments. Click the add button and fill in the fields as necessary.</p>
<p>Then ensure you pick the correct execution environment in your job templates.</p>
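<p>As a sanity check, the collections baked into the image can be listed before it's used in a job template (assuming Docker as the container runtime):</p>
<pre class="command-line" data-user="user" data-host="host"><code class="language-bash">docker run --rm custom-environment ansible-galaxy collection list</code></pre>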
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">c86e40d5fb67ac9139d5e1de8edd4df1</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>AWX and Jumphost Configuration</title>
      <link>https://nerdsniped.se/posts/awx-and-jumphost-configuration/</link>
      <description><![CDATA[<p>This is heavily based on <a href="https://github.com/ansible/awx/issues/9893#issuecomment-923197922">this Github issue comment</a> but with some changes.</p>
<p>AWX is required to run on a Kubernetes cluster, like k3s. Each execution of a playbook takes place inside a pod that gets created for the run and destroyed once the playbook is done.</p>
<p>To ensure AWX can connect via an ssh jumphost we need to create a <code>ConfigMap</code> containing the relevant ssh configuration and a <code>Secret</code> containing the private part of the ssh key used.</p>
<pre><code class="language-yaml">---
kind: ConfigMap
apiVersion: v1
metadata:
  name: awx-ssh-config
  namespace: awx
data:
  default: |
    Host * !jumphost
      UserKnownHostsFile /dev/null
      StrictHostKeyChecking no
      HostKeyAlgorithms=+ssh-rsa
      KexAlgorithms=+ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
      Ciphers=+aes128-cbc
    Host jumphost
      Hostname jumphost.example.net
      User awx
      UserKnownHostsFile /dev/null
      StrictHostKeyChecking no
      IdentityFile /runner/.ssh/id_ed25519
---
kind: Secret
apiVersion: v1
metadata:
  name: awx-ssh-key
  namespace: awx
type: Opaque
data:
  default: &lt;base64 encoded data of id_ed25519&gt;

</code></pre>
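<p>Rather than base64-encoding the key by hand, the <code>Secret</code> can also be created directly with <code>kubectl</code>, which handles the encoding (equivalent to the manifest above):</p>
<pre class="command-line" data-user="user" data-host="host"><code class="language-bash">kubectl -n awx create secret generic awx-ssh-key --from-file=default=$HOME/.ssh/id_ed25519</code></pre>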
<p>Apply the changes (<code>kubectl apply -f .</code>) and log in to AWX as an admin user.</p>
<p>Go to Administration, Instance Groups and modify the default group:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - &#x27;--private-data-dir=/runner&#x27;
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: ssh-config
          mountPath: /runner/.ssh/config
          subPath: default
        - name: ssh-key
          mountPath: /runner/.ssh/id_ed25519
          subPath: default
      securityContext:
        runAsUser: 1000
        runAsGroup: 0
  volumes:
    - name: ssh-config
      configMap:
        name: awx-ssh-config
        defaultMode: 0400
    - name: ssh-key
      secret:
        secretName: awx-ssh-key
        defaultMode: 0400
  securityContext:
    runAsUser: 1000
    runAsGroup: 0
    fsGroup: 0
</code></pre>
<p>Kubernetes is not my area of expertise, but the <code>ConfigMap</code> and <code>Secret</code> get mounted as files into the pod filesystem. The <code>securityContext</code> ensures the user inside the pod can read the files; I'm not sure exactly how, but it works.</p>
<p>Important to note is that the default execution environment (at least when I'm writing this) doesn't contain <code>ansible-pylibssh</code>, so as noted in my <a href="https://nerdsniped.se/posts/connect-to-network-devices-through-a-jumphost-with-ansible/">last post</a> the relevant ssh settings might not work anyway.</p>
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">a60e59a926ca51474472fd6fc0cc723e</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Connect to Network Devices Through a Jumphost with Ansible</title>
      <link>https://nerdsniped.se/posts/connect-to-network-devices-through-a-jumphost-with-ansible/</link>
<description><![CDATA[<p>I have some network devices (routers) and I want to run playbooks on them. As good practice dictates, all connections to the routers are limited to certain IP addresses, so I'm required to jump the ssh connection via an intermediate server: a jumphost.</p>
<p>In theory this should be pretty easy, but I spent a lot of time getting it to work. There's plenty of information about this subject out there, but for some reason I couldn't get it to work. Hopefully my findings can help someone else, because this was not a fun experience.</p>
<p>This is my playbook:</p>
<pre><code class="language-yaml">---
- name: Jumphost testing
  hosts: all
  gather_facts: false

  tasks:
    - name: Gather facts
      cisco.iosxr.iosxr_facts:
      register: device_output

    - name: Print facts
      ansible.builtin.debug:
        msg:
          - &quot;{{ device_output }}&quot;
</code></pre>
<p>I execute it with this command:</p>
<pre><code class="language-bash">ansible-playbook -i inventory/hosts.yml -u user -k playbooks/jumphost_test.yml
</code></pre>
<p>After I enter my password for the routers the playbook will fail with a <code>connection refused</code> error message. This is because Ansible is trying to connect directly, without jumping through a jumphost.</p>
<p>Ideally I want Ansible to use the settings in my <code>~/.ssh/config</code> file automatically (like described <a href="https://www.jeffgeerling.com/blog/2022/using-ansible-playbook-ssh-bastion-jump-host">here</a>) but no matter what I couldn't get that to work.</p>
<p>I suspect using the <code>ansible.netcommon.network_cli</code> connection setting messes a bit with how the ssh settings are read, but I have no source for that. In the end I had to combine the two methods.</p>
<p>In <code>ansible.cfg</code> I had to make the following modification:</p>
<pre><code class="language-ini">[ssh_connection]
ssh_common_args=&quot;-o ProxyCommand=&#x27;ssh -W %h:%p -q jumphost&#x27;&quot;
</code></pre>
<p>And in <code>~/.ssh/config</code>:</p>
<pre><code class="language-ssh-config">Host * !jumphost
  HostKeyAlgorithms=+ssh-rsa
  KexAlgorithms=+diffie-hellman-group1-sha1,diffie-hellman-group-exchange-sha1
  Ciphers=+aes128-cbc

Host jumphost
  Hostname jumphost.example.net
  User user
  IdentityFile ~/.ssh/id_ed25519
</code></pre>
<p>Some caveats I found along the way:</p>
<ul>
<li>I couldn't specify another ssh_config using <code>-F</code>.</li>
<li>I couldn't specify a jumphost using <code>-J</code>, instead of using <code>ProxyCommand</code>.</li>
<li>Specifying a <code>User</code> under <code>Host *</code> doesn't work; I'm not sure what's happening, but I have to pass <code>-u</code> to <code>ansible-playbook</code>.</li>
<li>I can't set <code>ProxyJump</code> under <code>Host *</code>; nothing seems to happen when I try to connect.</li>
</ul>
<p>With these changes I'm now able to execute the playbook, and it will happily use the jumphost specified.</p>
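<p>When something doesn't work it can help to verify the jumphost path with plain <code>ssh</code> first, outside of Ansible (<code>router1</code> is a placeholder for one of the devices):</p>
<pre class="command-line" data-user="user" data-host="host"><code class="language-bash">ssh -o ProxyCommand=&#x27;ssh -W %h:%p -q jumphost&#x27; user@router1</code></pre>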
<p><em>Update 2025-03-06</em></p>
<p>I did another test after committing this post.</p>
<p>My <code>ansible.cfg</code> looks like this:</p>
<pre><code class="language-ini">[defaults]
transport=ssh
</code></pre>
<p>From what I can tell this means that Ansible should use <code>ansible-pylibssh</code> (which is backed by libssh) for ssh connections, and if that Python package isn't available Ansible will fall back to Paramiko.</p>
<p>When I set <code>transport=paramiko</code> in the config Ansible didn't connect via my jumphost, so make sure <code>ansible-pylibssh</code> is installed.</p>
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">c3f62f984072f7947d65f6c22023a76f</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Notes on UTM</title>
      <link>https://nerdsniped.se/posts/notes-on-utm/</link>
      <description><![CDATA[<p><a href="https://getutm.app">UTM</a> seems to be the best and easiest way to run virtual machines on MacOS (and iOS...?), and while it works great for my simple use-cases I had some trouble getting there.</p>
<ul>
<li>
<p>In order for the VM to receive an IPv4 address from UTM's internal DHCP server I had to change the network setting from <code>Shared Network</code> to <code>Emulated VLAN</code>.</p>
<ul>
<li>This option doesn't exist when using Apple Virtualization.</li>
<li>The VM does receive an IPv6 address, but no default route. Not sure if there's any internal NAT64 magic going on, but if the rest of the network isn't IPv6 capable it's just easier to change the network mode.</li>
</ul>
</li>
<li>
<p>There's no option for port forwarding when using Apple Virtualization.</p>
</li>
<li>
<p>When creating a port forward the order of the fields is guest address, guest port, host address, host port.</p>
<ul>
<li>To create a local forward for SSH the guest address can be left empty, guest port is 22, host address set to 127.0.0.1 and host port to 2222. SSH to 127.0.0.1:2222 and you're connected to the VM.</li>
</ul>
</li>
<li>
<p>When sharing a host folder with the VM it can be mounted using <code>mount -t 9p -o trans=virtio share /mnt/</code>.</p>
<ul>
<li>
Or in <code>/etc/fstab</code>:<ul>
<li><code>share           /opt/dev        9p      trans=virtio    0       0</code></li>
</ul>
</li>
<li><code>share</code> is a mount tag defined by UTM.</li>
</ul>
</li>
</ul>
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">472d8448e1372c0395bac2243b895ff8</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Default sorting in Thunderbird</title>
      <link>https://nerdsniped.se/posts/default-sorting-in-thunderbird/</link>
<description><![CDATA[<p>By default Thunderbird sorts e-mails in descending order (newest at the top), but I want the opposite. It's possible to do this folder-by-folder, but I want it to be the default behaviour.</p>
<p>I couldn't get this to work with an existing installation of Thunderbird, so I had to remove all settings by deleting (on MacOS) <code>~/Library/Thunderbird</code>.</p>
<p>Go to <code>Settings</code> and scroll down to <code>Config Editor</code>. Search for <code>mailnews.default</code>.</p>
<p>Ensure these settings are set like this:</p>
<pre><code class="language-plain">mailnews.default_sort_order -&gt; 1
mailnews.default_sort_type  -&gt; 18
mailnews.default_view_flags -&gt; 1
</code></pre>
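<p>The same preferences can also be pre-seeded by dropping a <code>user.js</code> file into the profile directory before the first start (Thunderbird uses the same preference mechanism as Firefox; the values mirror the settings above):</p>
<pre><code class="language-javascript">user_pref(&quot;mailnews.default_sort_order&quot;, 1);
user_pref(&quot;mailnews.default_sort_type&quot;, 18);
user_pref(&quot;mailnews.default_view_flags&quot;, 1);
</code></pre>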
<p>Close the config editor and add an account. Don't close Thunderbird in between since any changed settings will be reverted.</p>
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">1c7da21c57a61739bb20aff028d0c049</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Ansible: Reset SSH Connection</title>
      <link>https://nerdsniped.se/posts/ansible-reset-ssh-connection/</link>
      <description><![CDATA[<p>Sometimes when performing actions on a server, you might have to log out and log in again to have the changes applied. An example would be when adding a user to a group (to, for instance, allow <code>sudo</code> usage).</p>
<p>If you're using Ansible and want to achieve this, here's a way to do it:</p>
<pre><code class="language-yaml">- name: Reset SSH connection
  ansible.builtin.meta:
    reset_connection
</code></pre>
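<p>In context, this typically follows a task whose effect only applies to new logins. A sketch of two such tasks (the user and group names are placeholders):</p>
<pre><code class="language-yaml">- name: Add the deploy user to the wheel group
  become: true
  ansible.builtin.user:
    name: deploy
    groups: wheel
    append: true

- name: Reset SSH connection so the new group membership applies
  ansible.builtin.meta: reset_connection
</code></pre>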
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">1bb1a741ad89266775631f8ab5529d41</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Health Checks in ExaBGP</title>
      <link>https://nerdsniped.se/posts/health-checks-in-exabgp/</link>
<description><![CDATA[<p>While lurking around on the Internet I found a link to <a href="https://yetiops.net/posts/anycast-bgp/">this blog post</a> which talks about DNS anycast using BGP. Since I'm operating the DNS servers at work this piqued my interest; I'm always interested in reading about how others solve things.</p>
<p>The post explains the concept of anycast, server configuration and how to set up BGP for this, so I don't really have anything to add.</p>
<p>I did, however, want to expand on the ExaBGP configuration. What Mr Yeti does is query the local DNS resolver for <code>yetiops.net</code> and check the result. If the exit code is anything but 0 (meaning some kind of error happened) then the BGP route for the DNS resolver will be withdrawn from the network. Otherwise, announce the route and receive traffic from users.</p>
<p>This isn't a bad thing to do, but we're putting a lot of trust in the servers that host the zone being checked. If those servers are experiencing issues and the zone being checked isn't resolving, the routes will be withdrawn, cutting off DNS for all the users even though there's no real problem with the resolver.</p>
<p>One option could be to check some other zone, but as we saw with Facebook <a href="https://en.wikipedia.org/wiki/2021_Facebook_outage">back in 2021</a> the big giants can also fail, and the way I see it, it's really just a matter of time. Make sure to read the link; it describes the problem quite well.</p>
<p>The option I went with instead is to always announce the IP address of the DNS resolver, but in case of a failure increase the <a href="https://www.catchpoint.com/bgp-monitoring/bgp-med">MED</a> to move the traffic away from the server. The server will still accept traffic (and throw it away in case of an actual error) but the network will choose not to send any.</p>
<p>What this means is that if a server encounters a local error (for instance crashing software) it will be taken out of rotation, but if there's a major fault with the zone being checked all servers will increase their MED and nothing will really change for the users.</p>
<p>Of course, if there's both a local fault <strong>and</strong> a fault upstream the server in question will cause issues for the users but hopefully this is something that can be caught using monitoring.</p>
<p>This is the ExaBGP configuration I'm using. There are more processes for secondary IPv4 addresses and IPv6, but in the end all checks are copies of this:</p>
<pre><code class="language-conf">process resolver_v4-check {
        run /usr/local/bin/python3.9 -m exabgp healthcheck \
                --cmd &quot;/usr/local/bin/dig +timeout=1 +tries=1 -4 -t SOA google.com @192.0.2.53&quot; \
                --interval 1 \
                --rise 30 \
                --fall 3 \
                --disable /usr/local/etc/exabgp/disable_resolver \
                --ip 192.0.2.53 \
                --next-hop 198.51.100.2 \
                --up-metric 0 \
                --down-metric 10000 \
                --disabled-metric 10000 \
                --up-execute &quot;logger DNS Resolver 192.0.2.53 going into state UP&quot; \
                --down-execute &quot;logger DNS Resolver 192.0.2.53 going into state DOWN&quot; \
                --disabled-execute &quot;logger DNS Resolver 192.0.2.53 going into state DISABLED&quot;;
        encoder text;
}

neighbor 198.51.100.1 {
        router-id 198.51.100.2;
        local-address 198.51.100.2;
        local-as 65053;
        peer-as 64496;
        hold-time 31;

        family {
                ipv4 unicast;
        }

        api anycast_v4 {
                processes [ resolver_v4-check ];
        }
}
</code></pre>
<p><code>192.0.2.53</code> is the local IP address bound to the loopback interface. There's a check every second and the <code>--fall</code> parameter tells ExaBGP to increase the MED after three failed checks. One check per second might be a bit excessive, but I want to react quickly to faults, and the result from the upstream DNS server is cached locally so I don't feel bad about putting any extra load on them.</p>
<p>The <code>--rise</code> parameter instead tells ExaBGP to only announce the route after 30 seconds of successful checks (meaning 30 checks). I chose this number to ensure intermittent failures don't cause traffic to flap back and forth.</p>
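<p>The interplay between <code>--rise</code> and <code>--fall</code> is simple hysteresis: flipping state requires an unbroken streak of results in the opposite direction. A small sketch of the idea (an illustration only, not ExaBGP's actual code):</p>
<pre><code class="language-python">class Hysteresis:
    # Track check results with rise/fall thresholds, in the spirit of
    # the exabgp healthcheck --rise and --fall options.
    def __init__(self, rise=30, fall=3):
        self.rise, self.fall = rise, fall
        self.up = True   # current announced state
        self.streak = 0  # consecutive results opposing the current state

    def report(self, ok):
        if ok == self.up:
            self.streak = 0
            return self.up
        self.streak += 1
        if self.streak &gt;= (self.fall if self.up else self.rise):
            self.up = ok
            self.streak = 0
        return self.up
</code></pre>
<p>With the defaults above, three consecutive failed checks flip the state to down (raising the MED), and it then takes 30 consecutive successful checks to come back up.</p>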
<p>I've also added the <code>--disable</code> parameter. If the file specified exists then ExaBGP will set the <code>--disabled-metric</code>. I <code>touch</code> the file before any maintenance on the server, like upgrades and reboots, and remove it afterwards. This will drain the server of queries and the users will not experience any loss of service.</p>
<p>Other than that there's some logging, which sends events to the local syslog daemon, which in turn ships them off to a remote syslog server.</p>
<p>This configuration serves me well and it's been running in production for a few years. Maybe someone else will find it useful.</p>
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">9b9e984b29c75405736be184272106b5</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
    <item>
      <title>Python Script Not Showing Output in CI/CD</title>
      <link>https://nerdsniped.se/posts/python-script-not-showing-output-in-cicd/</link>
      <description><![CDATA[<p>I'm running a Python script in my Gitlab CI/CD pipeline, and it basically looks like this:</p>
<pre><code class="language-python">if __name__ == &quot;__main__&quot;:
    errors = do_thing(&quot;file.yml&quot;)
    if errors:
        print(&quot;The following errors were found:&quot;)
        for error in errors:
            print(f&quot;- {error}&quot;)
        os._exit(1)
    else:
        print(&quot;File is looking good!&quot;)
        os._exit(0)
</code></pre>
<p>And the <code>gitlab-ci.yml</code>:</p>
<pre><code class="language-yaml">stages:
  - build

validate file:
  stage: build
  image: python:3.11
  script:
    - python3 ./scripts/check.py
</code></pre>
<p>If there are any errors they won't get shown before the pipeline fails.</p>
<p>This can be fixed by setting the environment variable <code>PYTHONUNBUFFERED</code> to a non-empty value or using the <code>-u</code> flag:</p>
<pre><code class="language-yaml">stages:
  - build

validate file:
  stage: build
  image: python:3.11
  script:
    - python3 -u ./scripts/check.py
</code></pre>
<p>The <a href="https://docs.python.org/3/using/cmdline.html#cmdoption-u">explanation</a> is that it &quot;forces the stdout and stderr streams to be unbuffered&quot;. It likely doesn't help that <code>os._exit</code> terminates the process immediately without flushing buffered output; <code>sys.exit</code> would at least give Python a chance to flush on the way out.</p>
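<p>There are also ways to avoid losing buffered output from inside the script itself (standard Python 3 behaviour; <code>reconfigure</code> requires Python 3.7+):</p>
<pre><code class="language-python">import sys

# Flush one print immediately
print(&quot;validating file...&quot;, flush=True)

# Or switch stdout to line buffering for the rest of the run
if hasattr(sys.stdout, &quot;reconfigure&quot;):
    sys.stdout.reconfigure(line_buffering=True)
</code></pre>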
]]></description>
      <author>johan@pwd.re (Johan Saf)</author>
      <guid isPermaLink="false">4073e299b3427eb80c385814f124348a</guid>
      <dc:creator>Johan Saf</dc:creator>
      <dc:format>text/html</dc:format>
    </item>
  </channel>
</rss>
