Everything Open 2024 – Day 3 talks

Keynote: Intelligent Interfaces: Challenges and Opportunities by Aaron Quigley

  • Eye Tracking of the user
    • DiffDisplays – Eye tracking detected when you looked away from a screen and froze it. When you looked back it gave you a summary/diff of what you missed
    • Brought this down to the widget level: a widget got a notification when the user was looking or away and could decide what to do
  • Change Blindness (different from attention blindness)
    • When the phone is far away, simplify the interface; show more detail when it is closer
    • People don’t notice details of displays slowly fading in and out as their distance from the display changes
  • Phone on table, screen up or screen down
    • SpeCam – A face-down phone can use its screen as a light source and guess the material it is sitting on
    • Accuracy same/better than a proper spectrometer
  • MicroCam – Phone placed with screen face up
    • Placement aware computing
  • OmniSense
    • 360 Camera
    • Track what the user’s whole body is doing
    • Tracks what is happening all around the user. Danger sensors, context aware output
  • BreathIn control. Breath pattern to control phone
    • Uses a camera in a watch position to detect hand gestures (looking at the top/back of the hand)
  • RotoSwype – Smart ring to do gesture keyboard input
  • RadarCat – Radar + Categorization
    • More socially acceptable than cameras everywhere and always on
    • Used to detect material
    • Complex pattern of reflection and absorption that returns lots of information
    • Trained on 661 features and 512 bins
    • Radar signal can even detect different colours. Different dyes interact differently
    • Can detect if people are wearing gloves
    • Application – Scales at self-checkout supermarket to detect what is being weighed
    • Radar in a shoe can recognise the surface and the layers below (carpet on wood etc)

Passwordless Linux – Passkey and External IdP support in FreeIPA by Fraser Tweedale

  • Passwords
    • Users aren’t diligent (weak passwords, reuse)
    • Using passwords securely imposes friction and cognitive load
    • Phishable
  • Objectives – Reduce password picking risks, phishing, friction, frequency of login
  • Alternatives
    • 2FA, Smartcard, Passkeys / WebAuthn, Web SSO Providers
  • 2FA
    • HOTP / TOTP etc
    • phishable
  • Smart Cards
    • Phishing Resistant
  • Passkeys
    • Better versions of MFA Cards
    • Phishing resistant
    • “passkey” term is a little vague
  • Web SSO
    • SAML, OAuth2
    • Using an existing account to authenticate
    • Some privacy concern
    • Keycloak, Redhat SSO, Okta, Facebook
    • Great on the web, harder in other context
  • What about our workstations?
    • pam has hooks for most of the above (Web SSO less common) or pam_sss does all
  • FreeIPA / Red Hat Identity Management
  • DEMO

Locknote: Who gets to work in STEM? And who is being left out? by Rae Johnston

  • Poor diversity affects the development of AI
  • False identification much higher by facial recognition for non-white people
  • Feed the AI more data sets?
  • Bias might not even be noticed if the developers are not diverse
  • Only around 25% of STEM people are Women
  • Only 15% of UK scientists came from working-class backgrounds (35% of the population)
  • 11% of Australians don’t have access to affordable Internet or don’t use it.
  • The digital divide is narrowing but getting deeper. Increasingly hard to function if you are not online
  • Male STEM graduates are 1.8x more likely than women to be in jobs that require their degree. Much worse for Indigenous people

Lightning Talks

  • Creating test networks with Network Namespace
    • ip netns add test-lan
  • Rerap Micron
  • Haystack Storage System
    • Time-based key/value store
  • AgOpenGPS
    • Self Steering System for Tractors
  • Common Network Myths
    • End to end packet loss is the only thing that matters
    • Single broadcast domain is a SPOF, broadcast storms etc
    • Ping and ICMP is your friend. Please allow ping
    • Don’t force 1500 MTU
    • Asymmetric routing is normal
    • non-standard port number doesn’t make you secure
  • radio:console for remote radio
  • WASM
    • FileSender – Share large datasets over the Internet
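The network-namespace trick from the first lightning talk extends naturally to a full isolated test network. A sketch with iproute2 (interface and address names here are made up for illustration; needs root):

```shell
# create a namespace and a veth pair linking it to the host
ip netns add test-lan
ip link add veth-host type veth peer name veth-test
ip link set veth-test netns test-lan

# give each end an address and bring the links up
ip addr add 10.99.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec test-lan ip addr add 10.99.0.2/24 dev veth-test
ip netns exec test-lan ip link set veth-test up

# test from inside the namespace
ip netns exec test-lan ping -c 1 10.99.0.1

# clean up
ip netns del test-lan
```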

Everything Open 2024 – Day 2 talks

Keynote: How Adversaries Use AI by Jana Dekanovska

  • Adversary
    • Nation States
    • Ecrime
    • Hacktivism
  • Trends
    • High Profile Ecrime attacks – Ransomware -> Data extortion
    • Malware-Free Attacks – Phish, Social engineering to get in rather than malware
    • Cloud Consciousness
    • Espionage – Focused on Eastern Europe and the Middle East
    • Vulnerability Exploitation – Not just zero-days; it takes a while to learn to leverage vulnerabilities
    • Cloud Consciousness – Adversary knows they are in the cloud, have to operate in it.
  • Generative AI
    • Code Generation
    • Social Engineering – Helps people sound like native speakers, improves wording
    • Prompt Injection
  • Big Four States sponsoring attacks – China, North Korea, Iran, Russia
  • North Korea – Often after money
  • Russia, Iran – Concentrating on local adversaries
  • China
    • 1M personnel in cyber security
    • Get as much data as possible
  • Elections
    • Won’t be hacking into voting systems
    • Will be generating news, stories, content and targeting populations
  • Crime Operations
    • GenAI helps efficiency and Speed of attacks
    • Average Breakout time faster from 10h in 2018 to 1h now
    • Members from around the world, at least one from Australia
    • Using ChatGPT to help out during intrusions to understand what they are seeing
    • Using ChatGPT to generate scripts

Consistent Eventually Replication Database by William Brown

  • Sites go down. Let’s have multiple sites for our database
  • CAP Theorem
  • PostgreSQL Database
    • Active Primary + Standby
    • Always Consistent
    • Promote passive to active in event of outage
    • Availability
    • But not partition tolerant
  • etcd
    • Nodes elect an active node which handles writes. If passive nodes go offline the others are still happy
    • If the active node fails then a new active node is elected and handles writes
    • Not available, since a lone node will stop serving because it doesn’t know the state of the other nodes (dead or just unreachable)
  • Active Directory
    • If node disconnected then it will just keep serving old data
    • Reads and writes are always serviced, even when a node is out of contact with the other nodes
    • Not consistent
  • Kanidm
    • identity management database
    • Want availability and partition tolerance
    • Because we want disconnected nodes to still handle reads and writes (eg for branch office that is off internet)
    • Also want to be able to scale very high, single node can’t handle all the writes
  • Building and Design
    • Simultaneous writes have to happen on multiple servers, what happens if writes overlap. Changes to same record on different servers
    • “What would Postgres do?”
    • Have nanosecond timestamps. Apply events nicely in order, only worry about conflicts. Use Lamport Clock (which only goes forward)
    • What happens if the timestamps match?
    • Servers get a uuid, timestamp gets uuid added to it so one server is slightly newer
    • Both servers can go through the process in isolation and arrive at the same database content
  • Lots more stuff but I got lost
    • Attribute State + CRDT
  • Most of your code will be doing weird paths. And they must all be tested.
  • Complaint that academic papers are very hard to read. Difficult to translate into code.
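The tie-breaking idea described above can be sketched in a few lines (a toy last-writer-wins register, not Kanidm’s actual code): every change is stamped with (timestamp, server uuid), so any two events have a total order even when timestamps collide, and isolated replicas replay to identical content.

```python
import uuid

def apply_events(events):
    """Replay events in (time, server_id) order; last writer wins per key."""
    state = {}
    for ts, server, key, value in sorted(events):
        state[key] = value
    return state

a = uuid.UUID("00000000-0000-0000-0000-00000000000a")
b = uuid.UUID("00000000-0000-0000-0000-00000000000b")

# Same timestamp on two servers: the server uuid breaks the tie.
events = [
    (1, a, "displayname", "Alice"),
    (2, a, "mail", "alice@example.com"),
    (2, b, "mail", "alice@example.org"),
]

# Both replicas replay in isolation and reach identical content,
# regardless of the order the events arrived in.
print(apply_events(events) == apply_events(list(reversed(events))))
print(apply_events(events)["mail"])
```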

Next Generation Authorisation – a developers guide to Cedar by Ricardo Sueiras

  • Authorisation is hard
  • Cedar
    • DSL around authorisation
    • Policy Language
    • Evaluation and Authorisation Engine
    • Easy to analyse
  • Authorisation Language
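For flavour, a Cedar policy reads something like this (the entity and attribute names follow the style of Cedar’s own documentation examples, not anything from the talk):

```cedar
// Allow a user to view photos in albums they own
permit (
    principal == User::"alice",
    action == Action::"viewPhoto",
    resource
)
when { resource.owner == principal };
```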

Managing the Madness of Cloud Logging by Alistair Chapman

  • The use case
  • All vendors put their logs in weird places and in weird sorts of ways. All differently
  • Different defaults for different events
  • Inconsistent event formats
  • Changes must be proactive – You have to turn logging on before you need it
  • Configuration isn’t static – Vendors can change the format with little warning
  • Very easy to access the platform APIs from a VM.
  • Easy to get on a VM if you have access to the Cloud platform
  • Platform Security Tools
    • Has access to all logs and can correlate events
    • Doesn’t work well if you are not 100% using their product. ie Multi-cloud
    • Can cost a lot, requires agents to be deployed
  • Integrating with your own SIEM platform
    • Hard to push logs out to external sources sometimes
    • Can get all three into Splunk, Loki or Elastic
    • You have to duplicate what the cloud provider has already done
  • Assess your requirements
    • How much do you need live correlation vs reviewing after something happened
    • Need to plan ahead
    • OCSF, OTel, ECS – Standards. Pick one and use it for everything
    • Try to log everything: audit events, performance metrics, billing
    • But obviously lots of logs cost lots of money
    • Make it actionable – Discoverability and correlation. Automation
  • Taming log Chaos
    • Learn from incidents – What sort of thing happens, what did you need available
    • Test assumptions – eg How trusted is “internal”
    • Log your logging – How would you know it is not working
    • Document everything – Make it easier to detect deviations from norm
    • Have processes/standards for the teams generating the events (eg what tags to use)
  • Prioritise common mistakes
    • Opportunity for learning
    • Don’t forget to train the humans
  • Think Holistically
    • App security is more than just code
    • Automation and tooling will help but not solve anything
    • If you don’t have a security plan… Make one
  • Common problems
    • Devs will often post keys to GitHub
    • GitHub has a feature to block common keys, but it must be enabled
  • Summary
    • The logs you gather must be actionable
    • Get familiar with the logs, and verify they actually work the way you think
    • Put the logs in one place if you can
    • Plan for the worst
    • Don’t let the logs overwhelm you. But don’t leave important events unlogged
    • The fewer platforms you use the easier it is

Everything Open 2024 – Day 1 talks

Developing in the open, building a product with our users by Toby Bellwood

  • The Lagoon Story
    • At amazee.io; is the Lagoon lead
    • What is Lagoon
    • Application to Kubernetes (docker build for customer, converts to k8s)
    • Docker based
    • Based on git workflows. Mostly Drupal, WordPress, PHP and NodeJS apps
    • Presets for the extra stuff like monitoring etc
  • Why
    • Cause Developers are too busy to do all that extra stuff
    • and Ops prefer it all automated away (the right way)
  • 8 full-time team members
    • Knows a lot about application, not so much about the users (apart from Amazee.io)
    • Users: Hosting providers, Agencies, Developers
    • The Adopter: Someone using it for something else, weird use cases
    • Agencies: Need things to go out quickly, want automation, like documentation to be good. Often need weird technologies because customers want them.
    • Developers: Just want it stable. Only worried about one project at a time. Often open-source minded
  • User Mindset
    • Building own tools using application
    • Do walking tours of the system, recorded zoom session
    • Use developer tools
    • Discord, Slack, Office Hours, Events, Easy Access to the team
  • Balance priorities
    • eg stuff customers will use even though amazee.io won’t
  • Engaging Upstream
    • Try to be a good participant, What they would want their customers to be
    • Encourage our teams to “contribute first”. Usually works well
  • Empowering the Team
    • Contribute under your own name
    • Participate in communities
  • How to stay Open Source forever?
    • Widening the Core Contributor Group
    • Learn from others in the Community. But most companies are not open sourcing the main component of their business.
    • Unsuccessful CNCF Sandbox project

Presenting n3n – A simple Peer to Peer VPN by Hamish Coleman

  • How does it compare to other VPNs?
    • Peer to peer
    • NAT piercing
    • Not all packets need to go via the server
    • Distributed ethernet switch – gives extra features
    • Userspace except for tuntap driver which is pretty common
    • Low deployment requirements, easy to install in multiple environments
    • Relatively simple security, not super secure
  • History
    • Based off n2n (developed by the people who did ntop)
    • But they changed the license in October 2023
    • Decided to fork into a new project
    • First release of n3n in April 2024
  • Big change was they introduced a CLA (contributor licensing agreement)
  • CLAs have problems
    • Legal document
    • Needs real identity; contributor hostile; asymmetry of power
    • Can lead to surprise relicencing
  • Alternatives to a CLA
  • Preserving Git history
    • Developer’s Certificate of Origin
    • Or it could be a CLA
  • Handling Changes
    • Don’t surprise your Volunteers
    • Don’t ignore your Volunteers
    • Do discuss with your Volunteers and bring them along
  • Alternatives
    • Wireguard – No NAT piercing
    • OpenVPN – Mostly client-to-server. Also too configurable
  • Why prefer
    • One simple access method across platforms (the speaker uses 4 different OSes)
    • p2p avoids latency because local instances talk directly
  • Goals
    • Protocol compatibility with n2n
    • Don’t break user visible APIs
    • Incrementally clean and improve codebase
  • How it works now
    • Supernode – Central co-ordination point, public IP, Some access control, Last-resort for packet forwarding
    • Communities – Nodes join, form a virtual segment
  • IP addresses
    • Can just run a DHCP server inside the network
  • Design
    • Tries to create a full mesh of nodes
    • Multiple Supernodes for metadata
  • Added a few features from n2n
    • INI file, Help text, Tidied up the CLI options and reduced options
    • Tried to make the defaults work better
  • Built in web server
    • Status page, jsonRPC, Socket interfaces, Monitoring/Stats
  • Current State of fork
    • Still young. Another contributor
    • Only soft announced. Growing base of awareness
  • Plans
    • IPv6
    • Optimise encryption/compression
    • Improve packaging and submit to distros
    • Test coverage
    • Better NAT piercing
    • Continue improve config experience
    • Selectable tuntap drivers
    • Mobile phone support hoped for but probably some distance away
  • Speaker’s uses for software
    • Manage his mother’s computer
    • Management interface for various servers around the world
    • LAN Gaming using Windows 98 machines
    • Connect back to home network to avoid region blocking
  • https://github.com/n42n/n3n

From the stone age to silicon: The Dwarf Axe guide to the evolution of technology by Steven Ellis

  • What is a “Dwarf Axe” ?
    • Snowflakes vs Dwarf Axes
    • It’s an axe that is handed down and consistently delivers a service
    • Both the head (software) and the handle (hardware) are maintained and upgraded separately. It is treated as the same platform even though it is quite different from what it was originally, yet it delivers the same services
  • Keeps a fairly similar set of services. Same box on an organisation diagram
  • Home IT
    • Phones handed down to family members. Often not getting security patches anymore
  • Enterprise IT
    • Systems kept long past their expected lifetime
    • Maintained via virtualisation
  • What is wrong with a Big Axe?
    • Too Big to Fail
    • Billion dollar projects fail.
  • Alternatives
    • Virtual Machines – Running on an Axe somewhere
    • Containers – Something big to orchestrate the containers
    • Microservices – Also needs orchestration
  • Redesign the Axe
    • The cloud – It’s just someone else’s Axe
  • Options
    • Everything as a service. 3rd party services
  • Re-use has an end-of-life
    • Modern hardware should have better (and longer) hardware support
  • Ephemeral Abstraction
    • Run anywhere
    • Scale out not up
    • Avoid single points of failure
    • Focus on the service (not the infra or the platform)
    • Use Open tools and approaches
  • Define your SOE
    • Not just your OS

Prometheus node_exporter crashed my server

I am in the middle of upgrading my home monitoring setup. I collect metrics via prometheus and query them with grafana. More details later but yesterday I ran into a little problem that crashed one of my computers.

Part of the prometheus ecosystem is node_exporter . This is a program that runs on every computer and exports cpu, ram, disk, network and other stats of the local machine back to prometheus.

One of my servers is a little HP Microserver Gen7 I bought in late 2014 and installed CentOS 7 on. It has a boot drive and 4 hard drives with data on them.

An HP Microserver gen7

I noticed this machine wasn’t showing up in the prometheus stats correctly. I logged in and found the version of node_exporter was very old and formatting its data in an obsolete way. So I downloaded the latest version, copied it over the existing binary and restarted the service…

…and my server promptly crashed. I rebooted the server and it crashed again a few seconds after the kernel started.

Obviously the problem was with the new version of node_exporter. However node_exporter is set to start immediately after boot, so I had to start Linux in “single user mode” (which doesn’t run any services), edit the file that starts node_exporter, and then reboot again to get the server up normally without it. I followed this guide for getting into single user mode.

After a bit of googling I came across node_exporter bug 903 (“node_exporter creating ACPI Error with Kernel error log”) which seemed similar to what I was seeing. The main difference is that my machine crashed rather than just giving an error. I put that down to my machine running fairly old hardware, firmware and operating systems.

The problem seems to be a bug in HP’s hardware/firmware around some stats that the hardware exports. Since node_exporter is trying to get lots of stats from the hardware including temperature, cpu, clock and power usage it is hitting one of the dodgy interfaces and causing a crash.

The bug suggests disabling the “hwmon” check in node_exporter. I tried this but I was still getting a slightly different crash that looked like clock or cpu frequency. Rather than trying to trace further I disabled all the collectors and then enabled the ones I needed one by one until the stats I wanted were populated (except for uptime, because it turns out the time stats via --collector.time were one thing that killed it).

So I ended up with the following command line

node_exporter --collector.disable-defaults

which appears to work reliably.
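The allow-list approach the text describes would look something like this (the exact set of collectors I re-enabled isn’t recorded above, so the ones shown here are illustrative; the flag names are standard node_exporter collector flags):

```
# start from nothing, then re-enable only collectors that proved safe
node_exporter --collector.disable-defaults \
    --collector.cpu \
    --collector.meminfo \
    --collector.filesystem \
    --collector.netdev \
    --collector.loadavg
```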


KVM Virtualisation on Ubuntu 22.04

I have been setting up a computer at home to act as a host for virtual machines. The machine is a recycled 10-year-old desktop with 4 cores, 32GB RAM and a 220GB SSD.

Growing the default disk

lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
df -h
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
df -h

Installing kvm and libvirt

Installing packages:

apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst bridge-utils cpu-checker cloud-utils

Setting up users and starting daemons

systemctl enable --now libvirtd
systemctl start libvirtd
systemctl status libvirtd
usermod -aG libvirt simon

Setting up Networking

I needed to put the instance on a static IP and then create a bridge so any VMs that were launched were on the same network as everything else at home.

I followed these articles

First remove the default networks created by KVM

~# virsh net-destroy default
Network default destroyed

~# virsh net-undefine default
Network default has been undefined

then run “ip add show” to check just the physical network is left

Backup and edit the file in /etc/netplan (00-installer-config.yaml in my case) that has the config for the network.

Config created by installer:

# This is the network config written by 'subiquity'
network:
  ethernets:
    enp2s0:
      dhcp4: true
  version: 2

Replacement config:

network:
  ethernets:
    enp2s0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [enp2s0]
      addresses: []           # static address elided in the original
      routes:
        - to: default
          # via: <gateway address, elided in the original>
      mtu: 1500
      nameservers:
        addresses: [,]
      parameters:
        stp: true
        forward-delay: 4
      dhcp4: false
      dhcp6: false
  version: 2

Note: The format in the 20.04 doc is slightly out of date (for the default route). Corrected in my file and the following link.

I used yamllint to check the config and “netplan try” and “netplan apply” to update.

Now we can make KVM aware of this bridge. create a scratch XML file called host-bridge.xml and insert the following:

<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

Use the following commands to make that our default bridge for VMs:

virsh net-define host-bridge.xml
virsh net-start host-bridge
virsh net-autostart host-bridge

And then list the networks to confirm it is set to autostart:

$ virsh net-list --all
 Name          State    Autostart   Persistent
 host-bridge   active   yes         yes

Booting a Virtual Machine

Now I want to create a virtual machine image that I can base other VMs on. I followed this guide:

Create Ubuntu 22.04 KVM Guest from a Cloud Image

First I downloaded jammy-server-cloudimg-amd64.img from cloud-images.ubuntu.com. Note this is the one that doesn’t have “disk” in its name.

~# wget http://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

Then I grew the disk image to 10GB and copied it to where libvirt could see it.

~# qemu-img resize jammy-server-cloudimg-amd64.img +8G

~# cp jammy-server-cloudimg-amd64.img /var/lib/libvirt/images/jammy2204.img

Now I need to configure the image, especially with a user and password so I can log in. The way to do this is with cloud-init, a special file of commands to configure a booting virtual machine. The slightly weird thing with KVM is that the file is on a virtual cdrom attached to the virtual machine.

First create the config

#cloud-config
system_info:
  default_user:
    name: simon
    home: /home/simon

password: hunter2
chpasswd: { expire: False }
hostname: ubuntu-22-cloud-image

# configure sshd to allow users logging in using password
# rather than just keys
ssh_pwauth: True

and save as bootconf.txt. Then convert it to an iso and copy that to the images folder

~# cloud-localds bootconf.iso bootconf.txt
~# cp bootconf.iso /var/lib/libvirt/images/

~# virsh pool-refresh default

Now I run the program virt-manager locally. This is a graphical program that connects from my desktop over ssh to the KVM server.

I use virt manager to connect to the KVM server and create a new virtual machine

  • Machine Type should be “Ubuntu 22.04 LTS”
  • It should boot off the jammy2204.img disk
  • The bootconf.iso should be attached to the CDROM. But the machine does not need to boot off it.
  • Set networking to be “Virtual network ‘host-bridge’: Bridge Network”

Boot the machine and you should be able to log in to the console using the user:password you created in the cloud-config. You can then change passwords, update packages and otherwise configure the instance to your liking. Once you have finished you can shut down the machine.
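The same VM can also be created from the command line with virt-install instead of the virt-manager GUI (a sketch only; the VM name, memory/vcpu sizes and --os-variant value are my assumptions, not from the original setup):

```
virt-install \
  --name jammy-base \
  --memory 2048 --vcpus 2 \
  --disk /var/lib/libvirt/images/jammy2204.img,device=disk,bus=virtio \
  --disk /var/lib/libvirt/images/bootconf.iso,device=cdrom \
  --os-variant ubuntu22.04 \
  --network network=host-bridge \
  --import --noautoconsole
```

The --import flag skips the install phase and boots straight off the existing disk image, which matches the cloud-image workflow above.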

To create a new VM you just need to clone the disk:

~# virsh vol-clone --pool default jammy2204.img newvm.img

and then create a new Virtual machine in virt-manager using the disk (no need for the iso since the disk has the correct passwords)


Moving my backups to restic

I’ve recently moved my home backups over to restic. I’m using restic to back up the /etc and /home folders on all machines, plus my website files and databases. Media files are backed up separately.

I have around 220 Gigabytes of data, about half of that is photos.

My Home setup

I currently have 4 regularly-used physical machines at home: two desktops, one laptop and a server. I also have a VPS hosted at Linode and a VM running on the home server. Everything is running Linux.

Existing Backup Setup

For at least 15 years I’ve been using rsnapshot for backups. rsnapshot works by keeping a local copy of the folders to be backed up. To update the local copy it uses rsync over ssh to pull down a copy from the remote machine. It then keeps multiple old versions of files by making a series of copies.

I’d end up with around 12 older versions of the filesystem (something like 5 daily, 4 weekly and 3 monthly) so I could recover files that had been deleted. To save space rsnapshot uses hard links so only one copy of a file is kept if the contents didn’t change.

I also backed up a copy to external hard drives regularly and kept one copy offsite.

The main problem with rsnapshot was it was a little clunky. It took a long time to run because it copied and deleted a lot of files every time it ran. It also is difficult to exclude folders from being backed up and it is also not compatible with any cloud based filesystems. It also requires ssh keys to login to remote machines as root.

Getting started with restic

I started playing around with restic after seeing some recommendations online. As a single binary with a few commands it seemed a little simpler than other solutions. It has a push model so needs to be on each machine and it will upload from there to the archive.

Restic supports around a dozen storage backends for repositories. These include the local file system, sftp and Amazon S3. When you create a repository via “restic init” it creates a simple file structure in most backends.
You can then use simple commands like “restic backup /etc” to backup files to there. The restic documentation site makes things pretty easy to follow.

Restic automatically encrypts backups and each server needs a key to read/write its backups. However any key can see all files in a repository, even those belonging to other hosts.

Backup Strategy with Restic

I decided on the following strategy for my backups:

  • Make a daily copy of /etc, /home and other files for each machine
  • Keep 5 daily and 3 weekly copies
  • Have one copy of data on Backblaze B2
  • Have another copy on my home server
  • Export the copies on the home server to external disk regularly

Backblaze B2 is very similar to Amazon S3 and is supported directly by restic. It is however cheaper: storage is 0.5 cents per gigabyte/month and downloads are 1 cent per gigabyte. In comparison AWS S3 One Zone Infrequent Access charges 1 cent per gigabyte/month for storage and 9 cents per gigabyte for downloads.

What                     Backblaze B2   AWS S3
Store 250 GB per month   $1.25          $2.50
Download 250 GB          $2.50          $22.50
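The comparison is just the per-gigabyte rates multiplied out, which is easy to sanity-check:

```python
# Rates quoted in the text, in dollars per GB
b2_store, b2_download = 0.005, 0.01   # Backblaze B2
s3_store, s3_download = 0.01, 0.09    # AWS S3 One Zone-IA

gb = 250
print(f"B2: store ${gb * b2_store:.2f}/month, download ${gb * b2_download:.2f}")
print(f"S3: store ${gb * s3_store:.2f}/month, download ${gb * s3_download:.2f}")
```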

AWS S3 Glacier is cheaper for storage but hard to work with and retrieval costs would be even higher.

Backblaze B2 is less reliable than S3 (they had an outage when I was testing) but this isn’t a big problem when I’m using them just for backups.

Setting up Backblaze B2

To setup B2 I went to the website and created an account. I would advise putting in your credit card once you finish initial testing as it will not let you add more than 10GB of data without one.

I then created a private bucket and changed the bucket’s lifecycle settings to only keep the last version.

I decided that for security I would have each server use a separate restic repository. This means I use a bit of extra space, since with a shared repository restic would keep only one copy of a file that is identical across machines. I ended up using around 15% more.

For each machine I created a B2 application key and set it to have a namePrefix with the name of the machine. This means that each application key can only see files in its own folder.

On each machine I installed restic and then created an /etc/restic folder. I then added the file b2_env:

export B2_ACCOUNT_ID=000xxxx
export B2_ACCOUNT_KEY=K000yyyy
export RESTIC_PASSWORD=abcdefghi
export RESTIC_REPOSITORY=b2:restic-bucket:/hostname

You can now just run “restic init” and it should create an empty repository; check via the B2 web interface to see.

I then had a simple script that runs:

source /etc/restic/b2_env

restic --limit-upload 2000 backup /home/simon --exclude-file /etc/restic/home_exclude

restic --limit-upload 2000 backup /etc /usr/local /var/lib /var/backups

restic forget --verbose --keep-last 5 --keep-daily 6 --keep-weekly 3

The “source” command loads in the api key and passwords.

The restic backup lines do the actual backup. I have restricted my upload speed with “--limit-upload 2000”, ie 2000 KiB/s (about 16 Megabits/second). The /etc/restic/home_exclude file lists folders that shouldn’t be backed up. For this I have:


as these are folders with regularly changing contents that I don’t need to backup.

The “restic forget” command removes older snapshots. I’m telling it to keep 6 daily copies and 3 weekly copies of my data, plus at least the most recent 5 no matter how old they are.

This command doesn’t actually free up the space taken up by the removed snapshots. I need to run the “restic prune” command for that. However according to this analysis the prune operation generates so many API calls and data transfers that the payback time on disk space saved can be months(!). So I only run the command approx once every 45 days. Here is the code for this:

prune_run() {
    echo "Running restic Prune"
    /usr/local/bin/restic prune --cleanup-cache --max-unused 20%
    echo " "
    touch /etc/restic/last_prune_b2
    echo "Updating restic if required"
    echo " "
    /usr/local/bin/restic self-update
}

prune_check() {
    if [[ ! -f /etc/restic/last_prune_b2 ]]; then
        touch -d "2 days ago" /etc/restic/last_prune_b2
    fi

    if [[ $(find /etc/restic/last_prune_b2 -mtime -30 -print) ]]; then
        echo "Last prune was less than 30 days ago so won't run prune"
        echo " "
    else
        echo "Chance of running prune is 1/30"
        RANDOM=$(date +%N | cut -b4-9)
        flip=$((1 + RANDOM % 30))
        if [[ $flip = 15 ]]; then
            prune_run
        fi
    fi
}

prune_check

Setting up sftp

As well as backing up to B2 I wanted to backup my data to my home server. In this case I decided to have a single repository shared by all the servers.

First of all I created a “restic” account on my server with a home of /home/restic. I then created a folder /media/backups/restic owned by the restic user.

I then followed this guide for sftp-only accounts to restrict the restic user. Relevant lines I changed were “Match User restic” and “ChrootDirectory /media/backups/restic”

On each host I also needed to run “cp /etc/ssh/ssh_host_rsa_key /root/.ssh/id_rsa” and add the host’s public ssh key to /home/restic/.ssh/authorized_keys on the server.

Then it is just a case of creating a sftp_env file like in the b2 example above. Except this is a little shorter:

export RESTIC_REPOSITORY=sftp:restic@server.darkmere.gen.nz:shared
export RESTIC_PASSWORD=abcdefgh

For backing up my VPS I had to do another step, since it couldn’t push files to my home. Instead I added a script that ran on the home server and used rsync to copy down folders from my VPS to local. I used rrsync to restrict this script.

Once I had a local folder I ran “restic --host vps-name backup /copy-of-folder” to back up over sftp. The --host option made sure the backups were listed for the right machine.

Since the restic folder is just a bunch of files, I copy it directly to an external disk which I keep outside the house.

Parting Thoughts

I’m fairly happy with restic so far. I haven’t run into too many problems or gotchas yet, although if you are starting out I’d suggest testing with a small repository to get used to the commands etc.

I have copies of keys in my password manager for recovery.

There are a few things I still have to do including setup up some monitoring and also decide how often to run the prune operation.


Donations 2020

Each year I do the majority of my Charity donations in early December (just after my birthday) spread over a few days (so as not to get my credit card suspended).

I also blog about it to hopefully inspire others. See: 2019, 2018, 2017, 2016, 2015

All amounts this year are in $US unless otherwise stated

My main donation was $750 to Givewell (to allocate to projects as they prioritize). Once again I’m happy that Givewell makes efficient use of money donated. I decided this year to give a higher proportion of my giving to them than last year.

Software and Internet Infrastructure Projects

€20 to Syncthing which I’ve started to use instead of Dropbox.

$50 each to the Software Freedom Conservancy and Software in the Public Interest. Money not attached to any specific project.

$51 to the Internet Archive

$25 to Let’s Encrypt

Advocacy Organisations

$50 to the Electronic Frontier Foundation

Others including content creators

I donated $103 to Signum University to cover Corey Olsen’s Exploring the Lord of the Rings series plus other stuff I listen to that they put out.

I paid $100 to be a supporter of NZ News site The Spinoff

I also supported a number of creators on Patreon:


Talks from KubeCon + CloudNativeCon Europe 2020 – Part 1

Various talks I watched from their YouTube playlist.

Application Autoscaling Made Easy With Kubernetes Event-Driven Autoscaling (KEDA) – Tom Kerkhove

I’ve been using KEDA a little bit at work. Good way to scale on random stuff. At work I’m scaling pods against the length of AWS SQS queues and as a cron. Lots of other options. This talk is a 9 minute intro. The small font on the screen in this talk is a bit hard to read.
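For reference, scaling a deployment on SQS queue length looks roughly like this ScaledObject (names, queue URL and thresholds are placeholders, not from the talk):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker          # the Deployment to scale
  minReplicaCount: 0
  maxReplicaCount: 20
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789/my-queue
        queueLength: "10"   # target messages per replica
        awsRegion: us-east-1
```

KEDA’s separate `cron` trigger type covers the scheduled-scaling case mentioned above.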

Autoscaling at Scale: How We Manage Capacity @ Zalando – Mikkel Larsen, Zalando SE

  • These guys have their own HPA replacement for scaling: kube-metrics-adapter.
  • Outlines some new stuff in scaling in 1.18 and 1.19.
  • They also have a fork of the Cluster Autoscaler (although some of what it does seems to duplicate Amazon Fleets).
  • Have up to 1000 nodes in some of their clusters. Have to play with address space per node, and they also scale their control plane nodes vertically (control plane autoscaler).
  • Use the Vertical Pod Autoscaler, especially for things like Prometheus that vary by the size of the cluster. Have had problems with it scaling down too fast. They have some of their own custom changes in a fork

Keynote: Observing Kubernetes Without Losing Your Mind – Vicki Cheung

  • Lots of metrics don’t cover what you want and get hard to maintain and complex
  • Monitor core user workflows (ie just test a pod launch and stop)
  • Tiny tools
    • 1 watches for events on cluster and logs them -> elastic
    • 2 watches container events -> elastic
    • End up with one timeline for a deploy/job covering everything
    • Empowers users to do their own debugging

Autoscaling and Cost Optimization on Kubernetes: From 0 to 100 – Guy Templeton & Jiaxin Shan

  • Intro to HPA and metric types. Plus some of the newer stuff like multiple metrics
  • Vertical Pod Autoscaler. Good for single pod deployments. Doesn’t work well with JVM-based workloads.
  • Cluster Autoscaler.
    • A few things like using preStop hooks to give pods time to shut down
    • Pod priorities for scaling.
    • --expendable-pods-priority-cutoff to not expand for low-priority jobs
    • Using the priority expander to try to expand spot instances first and then fall back to more expensive node types
    • Using mixed instance policies with AWS. Lots of instance types (same CPU/RAM though) to choose from.
    • Look at PodDisruptionBudgets
    • Some other CA flags like --scale-down-utilization-threshold to look at.
  • Mention of Keda
  • Best return is probably tuning HPA
  • There is also another similar talk. Note the male speaker talks very slowly so crank up the speed.
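For reference, the autoscaler flags mentioned above all go on the cluster-autoscaler command line; a sketch with placeholder values (not from the talk):

```shell
cluster-autoscaler \
  --cloud-provider=aws \
  --expander=priority \
  --expendable-pods-priority-cutoff=-10 \
  --scale-down-utilization-threshold=0.5
```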

Keynote: Building a Service Mesh From Scratch – The Pinterest Story – Derek Argueta

  • Changed to Envoy as a http proxy for incoming
  • Wrote own extension to make feature complete
  • Also another project migrating to mTLS
    • Huge amount of work for Java.
    • Lots of work to repeat for other languages
    • Looked at getting Envoy to do the work
    • Ingress LB -> Inbound Proxy -> App
  • Used j2 (Jinja2) templates to build the static config (with checking, tests, validation)
  • Rolled out to put envoy in front of other services with good TLS termination default settings
  • Extra Mesh Use Cases
    • Infrastructure-specific routing
    • SLI Monitoring
    • http cookie monitoring
  • Became a platform that people wanted to use.
  • Solving one problem first and incrementally using other things. Many groups had similar problems. “Just a node in a mesh”.

Improving the Performance of Your Kubernetes Cluster – Priya Wadhwa, Google

  • Tools – Mostly tested locally with Minikube (she is a Minikube maintainer)
  • Minikube pause – Pause the Kubernetes systems processes and leave app running, good if cluster isn’t changing.
  • Looked at some articles from Brendan Gregg
  • Ran USE Method against Minikube
  • eBPF BCC tools against Minikube
  • biosnoop – noticed lots of writes from etcd
  • KVM Flamegraph – Lots of calls from ioctl
  • Theory that etcd writes might be a big contributor
  • How to tune etcd writes (updated the --snapshot-count flag to various numbers but it didn’t seem to help)
  • Noticed CPU spikes every few seconds
  • “pidstat 1 60”. Noticed a kubectl command running often. Running “kubectl apply addons” regularly
  • Suspected addon manager running often
  • Could increase addon manager polltime but then addons would take a while to show up.
  • But in Minikube this is not a problem because minikube knows when new addons are added, so it can run the addon manager directly rather than having it poll.
  • 32% reduction in overhead from turning off addon polling
  • Also reduced coredns number to one.
  • pprof – go tool
  • kube-apiserver pprof data
  • Spending lots of times dealing with incoming requests
  • Lots of requests from kube-controller-manager and kube-scheduler around leader-election
  • But Minikube is only running one of each. No need to elect a leader!
  • Flag to turn both off: --leader-elect=false
  • 18% reduction from reducing coredns to 1 and turning leader election off.
  • Back to looking at etcd overhead with pprof
  • writeFrameAsync in http calls
  • Theory: could increase --proxy-refresh-interval from 30s up to 120s. Good value at 70s but unsure what the behaviour change was. Asked and it didn’t appear to be a big problem.
  • 4% reduction in overhead
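If you want to reproduce the leader-election change on your own Minikube, the component flags can be passed at start time (this flag plumbing is my assumption, not from the talk):

```shell
minikube start \
  --extra-config=controller-manager.leader-elect=false \
  --extra-config=scheduler.leader-elect=false
```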

Linkedin putting pressure on users to enable location tracking

I got this email from Linkedin this morning. It is telling me that they are going to change my location from “Auckland, New Zealand” to “Auckland, Auckland, New Zealand“.

Email from Linkedin on 30 August 2020

Since “Auckland, Auckland, New Zealand” sounds stupid to New Zealanders (Auckland is pretty much a big city with a single job market and is not a state or similar) I clicked on the link and opened the application to stick with what I currently have.

Except the problem is that the pulldown doesn’t offer me any other locations.

The only way to change the location is to click “use Current Location” and then allow Linkedin to access my device’s location.

According to the help page:

By default, the location on your profile will be suggested based on the postal code you provided in the past, either when you set up your profile or last edited your location. However, you can manually update the location on your LinkedIn profile to display a different location.

but it appears the manual method is disabled. I am guessing they have a fixed list of locations in my postcode and this can’t be changed.

So it appears that my options are to accept Linkedin’s crappy name for my location (Other NZers have posted problems with their location naming) or to allow Linkedin to spy on my location and it’ll probably still assign the same dumb name.

This basically appears to be a way for Linkedin to push users to enable location tracking. While at the same time they get to force their own ideas of how New Zealand locations work on users.


Sidewalk Delivery Robots: An Introduction to Technology and Vendors

At the start of 2011 Uber was in one city (San Francisco). Just 3 years later it was in hundreds of cities worldwide including Auckland and Wellington. Dockless Electric Scooters took only a year from their first launch to reach New Zealand. In both cases the quick rollout in cities left the public, competitors and regulators scrambling to adapt.

Delivery Robots could be the next major wave to rollout worldwide and disrupt existing industries. Like driverless cars these are being worked on by several companies but unlike driverless cars they are delivering real packages for real users in several cities already.

Note: I plan to cover other aspects of Sidewalk Delivery Robots including their impact on society in a followup article.

What are Delivery Robots?

Delivery Robots are driverless vehicles/drones that cover the last mile. They are loaded with a cargo and then will go to a final destination where they are unloaded by the customer.

Indoor Robots are designed to operate within a building. An example of these is The Keenon Peanut. These deliver items to guests in hotels or restaurants. They allow delivery companies to leave food and other items with the robot at the entrance/lobby of a building rather than going all the way to a customer’s room or apartment.

Keenon Peanut

Flying Delivery Drones are being tested by several companies. Wing, which is owned by Google’s parent company Alphabet, is testing in Canberra, Australia. Amazon also had a product called Amazon Prime Air which appears to have been shelved.

Wing Flying Robot

The next size up are sidewalk delivery robots, which I’ll be concentrating on in this article. Best known of these is Starship Technologies but there are also Kiwi and Amazon Scout. These are designed to drive at slow speeds on the footpaths rather than mix with cars and other vehicles on the road. They cross roads at standard crossings.

Starship Delivery Robot
Amazon Scout

Finally some companies are rolling out car-sized Delivery Robots designed to drive on roads and mix with normal vehicles. The REV-1 from Refraction AI is at the smaller end, with company videos showing it using both car and bike lanes. Larger still is the small-car-sized Nuro.


Sidewalk Delivery Robots

I’ll concentrate most on Sidewalk Delivery Robots in this article because I believe they are the most mature and likeliest to have an effect on society in the short term (next 2-10 years).

  • In-building bots are a fairly niche product that most people won’t interact with regularly.
  • Flying Drones are close to working but it seems it will be some time before they can function safely and autonomously in a built-up environment. Cargo capacity is currently limited in most models and larger units will bring new problems.
  • Car (or motorbike) sized bots have the same problems as driverless cars. They have to drive fast and be fully autonomous in all sorts of road conditions. There is no time to ask for human help; a vehicle stopped on the road will at best block traffic and at worst be involved in an accident. These stringent requirements mean widespread deployment is probably at least 10 years away.

Sidewalk bots are much further along in their evolution and they have simpler problems to solve.

  • A small vehicle that can carry a takeaway or small grocery order is buildable using today’s technology and not too expensive.
  • Footpaths exist most places they need to go.
  • Walking pace (up to 6 km/h) is fast enough to be good enough even for hot food.
  • Ubiquitous wireless connectivity enables the robots to be controlled remotely if they cannot handle a situation automatically.
  • Everything unfolds slower on the sidewalk. If a sidewalk bot encounters a problem it can just slow to a stop and wait for remote help. If that process takes 20 seconds then it is usually no problem.

Starship Technologies

Starship are the best known and most advanced vendor in the sector. They launched in 2015 and have a good publicity team.

In late 2019 Starship announced a major rollout to US university campuses “with their abundance of walking paths, well-defined boundaries, and smartphone-using, delivery-minded student bodies”. Campuses include The University of Mississippi and Bowling Green State University.

The push into college campuses was unluckily timed, with many being closed in 2020 due to Covid-19. Starship has increased delivery areas outside of campus in some places to try and compensate. It has also seen a doubling of demand in Milton Keynes. However the company laid off some workers in March 2020.



Kiwibot

Kiwibot is one of the few other companies that has gone beyond the prototype stage to servicing actual customers. It is some way behind Starship, with its robots being less autonomous and needing more onsite helpers.

  • Based in Colombia with a major deployment in Berkeley, California around the UCB campus area
  • Robots cost $US 3,500 each
  • Smaller than Starship with just 1 cubic foot of capacity. Range and speed reportedly lower
  • Guided by remote control using way-points by operators in Medellín, Colombia. Each operator can control up to 3 bots.
  • On-site operators in Berkeley maintain robots (and rescue them when they get stuck).
  • Some orders delivered wholly or partially by humans
  • Concentrating on the Restaurant delivery market
  • Costs for the Business
    • Lease/rent starts at $20/day per robot
    • Order capacity 6-18/day per Robot depending on demand & distance.
    • Order fees are $1.99/order with 1-4 Kiwibots leased
    • or $0.99/order if you have 5-10 Kiwibots leased
  • Website, Kiwibot Campus Vision Video , Kiwibot end-2019 post

An interesting feature is that Kiwibot publish their prices for businesses and provide a calculator with which you can calculate break-even points for robot delivery.
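As a rough sketch of that break-even calculation, using the published figures above ($20/day lease, $1.99/order at the 1-4 robot tier) against a ballpark $5/order charge from existing players (the $5 figure is my assumption, not Kiwibot’s):

```shell
#!/bin/bash
# Per-order cost of a leased Kiwibot at different daily order volumes,
# in cents, versus a ballpark $5/order from existing delivery players.
lease_per_day=2000   # $20/day lease per robot
fee_per_order=199    # $1.99/order (1-4 robots leased)
human_cost=500       # ballpark existing-player charge (assumption)
for orders in 5 10 15; do
    robot_cost=$(( (lease_per_day + fee_per_order * orders) / orders ))
    echo "${orders} orders/day: ${robot_cost}c/order (vs ${human_cost}c)"
done
```

At 5 orders/day the robot works out at 599c/order, so it only beats the ~$5 ballpark above roughly 7 orders/day per robot, which sits comfortably inside Kiwibot’s quoted 6-18 orders/day capacity.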

As with Starship, Kiwibot was hit by Covid-19 closing college campuses. In July 2020 they announced a rollout in the city of San Jose, California in partnership with Shopify and Ordermark. The company is trying to pivot towards just building the robot infrastructure and partnering with companies that already have that [marketplace] in mind. They are also trying to crowdfund for investment money.

Amazon Scout

Amazon Scout

Amazon are only slowly rolling out their Scout robots. It is similar in appearance to the Starship vehicle but is larger.

  • Announced in January 2019
  • Weight “about 100 pounds” (about 45 kg). No further specs available.
  • A video of the development team at Amazon
  • Initially delivering to Snohomish County, Washington near Amazon HQ
  • Added Irvine, California in August 2019 but still supervised by human
  • In July 2020 announced rollouts in Atlanta, Georgia and Franklin, Tennessee, but still “initially be accompanied by an Amazon Scout Ambassador”.

Other Companies

There are several other companies also working on Sidewalk Delivery Robots. The most advanced is restaurant delivery company Postmates (now owned by Uber), whose robot Serve is in early testing. There is video of it on the street.

Several other companies have also announced projects. None appear to be near rolling out to live customers though.

Business Model and Markets

At present Starship and Kiwi are mainly targeting the restaurant delivery market against established players such as Uber Eats. Reasons for going for this market include:

  • Established market, not something new
  • Short distances and small cargo
  • Customers unload the product quickly so there is no waiting around
  • Charges by existing players quite high. Ballpark costs of $5 to the customer (plus a tip in some countries) and the restaurant also being charged 30% of the bill
  • Even with the high charges drivers end up making only around minimum wage.
  • The current business model is only just working. While customers find it convenient and the delivery cost reasonable, restaurants and drivers are struggling to make money.

Starship and Amazon are also targeting the general delivery market. This requires higher capacity and also customers may not be home when the vehicle arrives. However it may be the case that if vehicles are cheap enough they could just wait till the customer gets home.

Still more to cover

This article was just a quick introduction to the Sidewalk Delivery Robots out there. I hope to do a later post covering more, including what the technology will mean for the delivery industry, for other sidewalk users, and for society in general.