Developing in the open, building a product with our users by Toby Bellwood
The Lagoon Story
Works at amazee.io, where he is the Lagoon Lead
What is Lagoon
Application to Kubernetes (docker build for customer, converts to k8s)
Docker based
Based on git workflows. Mostly Drupal, WordPress, PHP and NodeJS apps
Presets for the extra stuff like monitoring etc
Why
Because developers are too busy to do all that extra stuff
and Ops would prefer it was all automated away (the right way)
8 full-time team members
The team knows a lot about the application, not so much about the users (apart from amazee.io itself)
Users: Hosting providers, Agencies, Developers
The Adopter: Someone using it for something else, weird use cases
Agencies: Need things to go out quickly, want automation, like documentation to be good. Often need unusual technologies because customers want them.
Developers: Just want it stable. Only worried about one project at a time. Often open-source minded
User Mindset
Building own tools using application
Do walking tours of the system, recorded zoom session
Use developer tools
Discord, Slack, Office Hours, Events, Easy Access to the team
Balance priorities
e.g. stuff customers will use even though amazee.io won’t use it
Engaging Upstream
Try to be a good participant, the kind they would want their own customers to be
Encourage our teams to “contribute first”. Usually works well
Empowering the Team
Contribute under your own name
Participate in communities
How to stay Open Source forever?
Widening the Core Contributor Group
Learn from others in the Community. But most companies are not open sourcing the main component of their business.
Unsuccessful CNCF Sandbox project
Presenting n3n – A simple Peer to Peer VPN by Hamish Coleman
How does it compare to other VPNs?
Peer to peer
NAT piercing
Not all packets need to go via the server
Distributed ethernet switch – gives extra features
Userspace except for tuntap driver which is pretty common
Low deployment requirements, easy to install in multiple environments
Relatively simple security, not super secure
History
Based off n2n (developed by the people who did ntop)
But they changed the license in October 2023
Decided to fork into a new project
First release of n3n in April 2024
Big change was they introduced a CLA (contributor license agreement)
CLAs have problems
Legal document
Requires real effort to deal with, contributor hostile, asymmetry of power
Can lead to surprise relicensing
Alternatives to a CLA
Preserving Git history
Developer’s Certificate of Origin
Or it could be a CLA
Handling Changes
Don’t surprise your Volunteers
Don’t ignore your Volunteers
Do discuss with your Volunteers and bring them along
Alternatives
Wireguard – No NAT piercing
OpenVPN – Mostly client to server. Also too configurable
Why prefer n3n
A single, simple access method across platforms (the speaker uses 4 different OSes)
p2p avoids latency because local instances talk to each other directly
Goals
Protocol compatibility with n2n
Don’t break user visible APIs
Incrementally clean and improve codebase
How it works now
Supernode – Central co-ordination point, public IP, Some access control, Last-resort for packet forwarding
Communities – Nodes join, form a virtual segment
IP addresses
Can just run a DHCP server inside the network
Design
Tries to create a full mesh of nodes
Multiple Supernodes for metadata
Added a few features from n2n
INI file, Help text, Tidied up the CLI options and reduced options
Tried to make the defaults work better
Built in web server
Status page, jsonRPC, Socket interfaces, Monitoring/Stats
Current State of fork
Still young, but has gained another contributor
Only soft announced. Growing base of awareness
Plans
IPv6
Optimise encryption/compression
Improve packaging and submit to distros
Test coverage
Better NAT piercing
Continue improving the config experience
Selectable tuntap drivers
Mobile phone support hoped for but probably some distance away
Speaker’s uses for software
Managing his mother’s computer
Management interface for various servers around the world
From the stone age to silicon: The Dwarf Axe guide to the evolution of technology by Steven Ellis
What is a “Dwarf Axe” ?
Snowflakes vs Dwarf Axes
It’s an Axe that is handed down and consistently delivers a service
Both the head (software) and the handle (hardware) are maintained and upgraded separately, and each must be looked after. It is treated as the same platform even though it is quite different from what it originally was, yet it delivers the same services.
It keeps providing fairly similar services. Same box on an organisation diagram
Home IT
Phones handed down to family members. Often not getting security patches anymore
Enterprise IT
Systems kept long past their expected lifetime
Maintained via virtualisation
What is wrong with a Big Axe?
Too Big to Fail
Billion dollar projects fail.
Alternatives
Virtual Machines – Running on an Axe somewhere
Containers – Something big to orchestrate the containers
Microservices – Also needs orchestration
Redesign the Axe
The cloud – It’s just someone else’s Axe
Options
Everything as a service. 3rd party services
Re-use has an end-of-life
Modern hardware should have better (and longer) support
Ephemeral Abstraction
Run anywhere
Scale out not up
Avoid single points of failure
Focus on the service (not the infra or the platform)
I am in the middle of upgrading my home monitoring setup. I collect metrics via prometheus and query them with grafana. More details later but yesterday I ran into a little problem that crashed one of my computers.
Part of the prometheus ecosystem is node_exporter . This is a program that runs on every computer and exports cpu, ram, disk, network and other stats of the local machine back to prometheus.
One of my servers is a little HP Microserver gen7 I bought in late-2014 and installed Centos 7 on. It has a boot drive and 4 hard drives with data on it.
I noticed this machine wasn’t showing up in the prometheus stats correctly. I logged in and checked, and the version of node_exporter was very old and formatting its data in an obsolete way. So I downloaded the latest version, copied it over the existing binary and restarted the service…
…and my server promptly crashes. So I reboot the server and it crashes a few seconds after the kernel starts.
Obviously the problem is with the new version of node_exporter. However node_exporter is set to start immediately after boot. So what I have to do is start Linux in “single user mode” ( which doesn’t run any services ), edit the file that starts node_exporter, and then reboot again to get the server up normally without it. I follow this guide for getting into single user mode.
After a bit of googling I came across node_exporter bug 903 ( “node_exporter creating ACPI Error with Kernel error log” ) which seems similar to what I was seeing. The main difference is that my machine crashed rather than just giving an error. I put that down to my machine running fairly old hardware, firmware and operating systems.
The problem seems to be a bug in HP’s hardware/firmware around some stats that the hardware exports. Since node_exporter is trying to get lots of stats from the hardware including temperature, cpu, clock and power usage it is hitting one of the dodgy interfaces and causing a crash.
The bug report suggests disabling the “hwmon” collector in node_exporter. I tried this but I was still getting a slightly different crash that looked like it was related to clock or cpu frequency. Rather than trying to trace it further I disabled all the collectors and then enabled the ones I needed one by one until the stats I wanted were populated ( except for uptime, because it turns out the time stats via --collector.time were one thing that killed it ).
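The general shape of the fix in the startup file is to turn everything off and then add collectors back one at a time, something like this (the collectors listed here are examples rather than my exact final set):
# disable all default collectors, then re-enable the ones that are safe
node_exporter --collector.disable-defaults \
  --collector.cpu \
  --collector.meminfo \
  --collector.filesystem \
  --collector.netdev \
  --collector.loadavg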
I have been setting up a computer at home to act as a host for virtual machines. The machine is a recycled 10-year-old desktop with 4 cores, 32GB RAM and a 220GB SSD.
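First the virtualisation packages need to be installed. I’m assuming an Ubuntu/Debian style host here; package names vary between distributions:
# KVM, libvirt and the command line tools (run as root)
apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst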
systemctl enable --now libvirtd
systemctl start libvirtd
systemctl status libvirtd
usermod -aG libvirt simon
Setting up Networking
I needed to put the instance on a static IP and then create a bridge so any VMs that were launched were on the same network as everything else at home.
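As a rough sketch of what that looks like if the host uses NetworkManager, plus the libvirt network definition for the bridge (the interface name and addresses below are made up for the example):
# create a bridge br0 and enslave the physical NIC (eno1 here) to it
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type bridge-slave ifname eno1 master br0
# give the bridge the static address instead of the NIC
nmcli con mod br0 ipv4.method manual ipv4.addresses 192.168.1.20/24 \
  ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli con up br0

# tell libvirt about the bridge so VMs can attach to it as "host-bridge"
cat > host-bridge.xml <<EOF
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF
virsh net-define host-bridge.xml
virsh net-start host-bridge
virsh net-autostart host-bridge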
Now I need to configure the image, especially with a user and password so I can log in. The way to do this is with cloud-init. This is a special file of commands that configures a booting virtual machine. The slightly weird thing with KVM is that the file is provided on a virtual cdrom attached to the virtual machine.
First create the config
#cloud-config
system_info:
  default_user:
    name: simon
    home: /home/simon
password: hunter2
chpasswd: { expire: False }
hostname: ubuntu-22-cloud-image
# configure sshd to allow users logging in using password
# rather than just keys
ssh_pwauth: True
and save as bootconfig.txt. Then convert it to an iso and copy that to the images folder.
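One way to do the conversion, assuming the cloud-image-utils package is available (the destination path is just where libvirt keeps my disk images):
# wrap the cloud-init user-data into a small seed ISO that KVM can attach
cloud-localds bootconf.iso bootconfig.txt
cp bootconf.iso /var/lib/libvirt/images/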
Now I run the program virt-manager locally. This is a graphical program that connects from my desktop over ssh to the KVM server.
I use virt manager to connect to the KVM server and create a new virtual machine
Machine Type should be “Ubuntu 22.04 LTS”
It should boot off the jammy2204.img disk
The bootconf.iso should be attached to the CDROM. But the machine does not need to boot off it.
Set networking to be “Virtual network ‘host-bridge’: Bridge Network”
Boot the machine and you should be able to login to the console using the user:password you created in the cloud-config. You can then change passwords, update packages and otherwise configure the instance to your liking. Once you have finished you can shut down the machine.
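If you would rather skip the GUI, roughly the same machine can be created from the command line with virt-install; something like this (the name, memory, CPU and paths are just examples):
virt-install \
  --name ubuntu-jammy \
  --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/jammy2204.img,format=qcow2 \
  --disk path=/var/lib/libvirt/images/bootconf.iso,device=cdrom \
  --os-variant ubuntu22.04 \
  --network network=host-bridge \
  --import --noautoconsole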
To create a new VM you just need to clone the disk:
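For example, with made-up names (virt-clone copies the disk and the machine definition in one step):
# copy just the disk image...
cp /var/lib/libvirt/images/jammy2204.img /var/lib/libvirt/images/newvm.img
# ...or clone an existing VM's definition plus its disk in one go
virt-clone --original ubuntu-jammy --name newvm --auto-clone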
I’ve recently moved my home backups over to restic. I’m using restic to back up the /etc and /home folders on all machines, plus my website files and databases. Media files are backed up separately.
I have around 220 Gigabytes of data, about half of that is photos.
My Home setup
I currently have 4 regularly-used physical machines at home: two desktops, one laptop and server. I also have a VPS hosted at Linode and a VM running on the home server. Everything is running Linux.
Existing Backup Setup
For at least 15 years I’ve been using rsnapshot for backups. rsnapshot works by keeping a local copy of the folders to be backed up. To update the local copy it uses rsync over ssh to pull down a copy from the remote machine. It then keeps multiple old versions of files by making a series of copies.
I’d end up with around 12 older versions of the filesystem (something like 5 daily, 4 weekly and 3 monthly) so I could recover files that had been deleted. To save space rsnapshot uses hard links so only one copy of a file is kept if the contents didn’t change.
I also backed up a copy to external hard drives regularly and kept one copy offsite.
The main problem with rsnapshot was that it was a little clunky. It took a long time to run because it copied and deleted a lot of files every time it ran. It is also difficult to exclude folders from being backed up, it is not compatible with any cloud-based storage, and it requires ssh keys that can log in to remote machines as root.
Getting started with restic
I started playing around with restic after seeing some recommendations online. As a single binary with a few commands it seemed a little simpler than other solutions. It has a push model so needs to be on each machine and it will upload from there to the archive.
Restic supports around a dozen storage backends for repositories. These include local file system, sftp and Amazon S3. When you create a repository via “restic init” it creates a simple file structure in most backends:
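For example, creating a local repository and then looking at what restic makes (the path is arbitrary):
restic init --repo /srv/restic-repo
ls /srv/restic-repo
# config  data  index  keys  locks  snapshots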
You can then use simple commands like “restic backup /etc” to backup files to there. The restic documentation site makes things pretty easy to follow.
Restic automatically encrypts backups and each server needs a key to read/write its backups. However any key can see all files in a repository, even those belonging to other hosts.
Backup Strategy with Restic
I decided on the following strategy for my backups:
Make a daily copy of /etc, /home and other files for each machine
Keep 5 daily and 3 weekly copies
Have one copy of data on Backblaze B2
Have another copy on my home server
Export the copies on the home server to external disk regularly
Backblaze B2 is very similar to Amazon S3 and is supported directly by restic. It is however cheaper: storage is 0.5 cents per gigabyte/month and downloads are 1 cent per gigabyte. In comparison, AWS S3 One Zone Infrequent Access charges 1 cent per gigabyte/month for storage and 9 cents per gigabyte for downloads.
What                     Backblaze B2   AWS S3
Store 250 GB per month   $1.25          $2.50
Download 250 GB          $2.50          $22.50
AWS S3 Glacier is cheaper for storage but hard to work with and retrieval costs would be even higher.
Backblaze B2 is less reliable than S3 (they had an outage when I was testing) but this isn’t a big problem when I’m using them just for backups.
Setting up Backblaze B2
To setup B2 I went to the website and created an account. I would advise putting in your credit card once you finish initial testing as it will not let you add more than 10GB of data without one.
I decided that for security I would have each server use a separate restic repository. This means I use a bit of extra space, since restic can then only keep one copy of an identical file per machine rather than one copy shared across machines. I ended up using around 15% more.
For each machine I created a B2 application key and set it to have a namePrefix with the name of the machine. This means that each application key can only see files in its own folder.
On each machine I installed restic and then created an /etc/restic folder. I then added the file b2_env:
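The file just exports the B2 credentials and the restic repository settings as environment variables, along these lines (all values here are placeholders):
export B2_ACCOUNT_ID="000123456789abc0000000001"
export B2_ACCOUNT_KEY="K000exampleexampleexample"
export RESTIC_REPOSITORY="b2:my-backup-bucket:myhost/"
export RESTIC_PASSWORD="a-long-random-password"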
The “source” command loads in the api key and passwords.
The restic backup lines do the actual backup. I have restricted my upload speed to 20 Megabits/second. The /etc/restic/home_exclude file lists folders that shouldn’t be backed up. For this I have:
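Roughly this sort of thing, although the exact paths are examples rather than my full list:
/home/*/.cache
/home/*/.local/share/Trash
/home/*/Downloads
/home/*/tmp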
as these are folders with regularly changing contents that I don’t need to backup.
The “restic forget” command removes older snapshots. I’m telling it to keep 6 daily copies and 3 weekly copies of my data, plus at least the most recent 5 no matter how old they are.
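Putting those pieces together, the backup part of the nightly script looks roughly like this (paths and exact values are illustrative; restic’s --limit-upload takes KiB/s, so 2500 is about 20 Megabits/second):
# load credentials, back up, then expire old snapshots
source /etc/restic/b2_env

/usr/local/bin/restic backup /etc
/usr/local/bin/restic backup /home \
  --exclude-file=/etc/restic/home_exclude \
  --limit-upload 2500

/usr/local/bin/restic forget \
  --keep-daily 6 --keep-weekly 3 --keep-last 5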
This command doesn’t actually free up the space taken up by the removed snapshots. I need to run the “restic prune” command for that. However according to this analysis the prune operation generates so many API calls and data transfers that the payback time on disk space saved can be months(!). So I only run the command approx once every 45 days. Here is the code for this:
prune_run() {
    echo "Running restic Prune"
    /usr/local/bin/restic prune --cleanup-cache --max-unused 20%
    echo " "
    touch /etc/restic/last_prune_b2
    echo "Updating restic if required"
    echo " "
    /usr/local/bin/restic self-update
}

prune_check() {
    if [[ ! -f /etc/restic/last_prune_b2 ]]; then
        touch -d "2 days ago" /etc/restic/last_prune_b2
    fi

    if [[ $(find /etc/restic/last_prune_b2 -mtime -30 -print) ]]; then
        echo "Last prune was less than 30 days ago so won't run prune"
        echo " "
    else
        echo "Chance of running prune is 1/30"
        RANDOM=$(date +%N | cut -b4-9)
        flip=$((1 + RANDOM % 30))
        if [[ $flip = 15 ]]; then
            prune_run
        fi
    fi
}
prune_check
Setting up sftp
As well as backing up to B2 I wanted to backup my data to my home server. In this case I decided to have a single repository shared by all the servers.
First of all I created a “restic” account on my server with a home of /home/restic. I then created a folder /media/backups/restic owned by the restic user.
I then followed this guide for sftp-only accounts to restrict the restic user. Relevant lines I changed were “Match User restic” and “ChrootDirectory /media/backups/restic”
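With that guide the relevant block in /etc/ssh/sshd_config ends up looking something like this (the last three lines are the usual sftp-only boilerplate from that kind of guide rather than anything restic-specific):
Match User restic
    ChrootDirectory /media/backups/restic
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no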
On each host I also needed to run “cp /etc/ssh/ssh_host_rsa_key /root/.ssh/id_rsa ” and also add the host’s public ssh_key to /home/restic/.ssh/authorized_keys on the server.
Then it is just a case of creating a sftp_env file like in the b2 example above. Except this is a little shorter:
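Something along these lines (the host name and path are examples):
export RESTIC_REPOSITORY="sftp:restic@homeserver:/restic"
export RESTIC_PASSWORD="another-long-random-password"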
For backing up my VPS I had to do another step since it couldn’t push files to my home server. What I did instead was add a script that runs on the home server and uses rsync to copy down folders from my VPS to a local folder. I used rrsync to restrict this script.
Once I had a local folder I ran “restic --host vps-name backup /copy-of-folder” to back it up over sftp. The --host option made sure the backups were listed under the right machine.
Since the restic folder is just a bunch of files, I copy it directly to an external disk which I keep outside the house.
Parting Thoughts
I’m fairly happy with restic so far. I have not run into too many problems or gotchas yet, although if you are starting out I’d suggest testing with a small repository to get used to the commands etc.
I have copies of keys in my password manager for recovery.
There are a few things I still have to do, including setting up some monitoring and deciding how often to run the prune operation.
Each year I do the majority of my Charity donations in early December (just after my birthday) spread over a few days (so as not to get my credit card suspended).
I also blog about it to hopefully inspire others. See: 2019, 2018, 2017, 2016, 2015
All amounts this year are in $US unless otherwise stated
My main donation was $750 to GiveWell (to allocate to projects as they prioritize). Once again I’m happy that GiveWell makes efficient use of money donated. I decided this year to give a higher proportion of my giving to them than last year.
Software and Internet Infrastructure Projects
€20 to Syncthing which I’ve started to use instead of Dropbox.
I’ve been using Keda a little bit at work. It’s a good way to scale on arbitrary metrics; at work I’m scaling pods against the length of AWS SQS queues and as a cron, and there are lots of other options. This talk is a 9 minute intro. The small font on the screen makes parts of this talk a bit hard to read.
Outlines some new stuff in scaling in 1.18 and 1.19.
They also have a fork of the Cluster Autoscaler (although some of what it does seems to duplicate Amazon Fleets).
They have up to 1000 nodes in some of their clusters. They have to play with the address space per node, and they also scale their control plane nodes vertically (control plane autoscaler).
They use the Vertical Pod Autoscaler, especially for things like prometheus whose usage varies with the size of the cluster. They have had problems with it scaling down too fast, and run some of their own custom changes in a fork.
Could increase the addon manager poll time but then addons would take a while to show up.
But in Minikube this is not a problem because minikube knows when new addons are added, so it can run the addon manager directly rather than having it poll.
32% reduction in overhead from turning off addon polling
Also reduced coredns number to one.
pprof – go tool
kube-apiserver pprof data
Spending lots of time dealing with incoming requests
Lots of requests from kube-controller-manager and kube-scheduler around leader-election
But Minikube is only running one of each. No need to elect a leader!
Flag to turn both off: --leader-elect=false
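For anyone wanting to try the same thing on a local minikube, the component flags can be passed through at start time; something like this should work (treat the exact config keys as an assumption):
minikube start \
  --extra-config=controller-manager.leader-elect=false \
  --extra-config=scheduler.leader-elect=false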
18% reduction from reducing coredns to 1 and turning leader election off.
Back to looking at etcd overhead with pprof
writeFrameAsync in http calls
Theory: could increase --proxy-refresh-interval from 30s up to 120s. 70s looked like a good value, but they were unsure what the behaviour change would be. They asked upstream and it didn’t appear to be a big problem.
I got this email from Linkedin this morning. It is telling me that they are going to change my location from “Auckland, New Zealand” to “Auckland, Auckland, New Zealand“.
Since “Auckland, Auckland, New Zealand” sounds stupid to New Zealanders (Auckland is pretty much a big city with a single job market and is not a state or similar) I clicked on the link and opened the application to stick with what I currently have
Except the problem is that the pulldown doesn’t offer me any other locations
The only way to change the location is to click “use Current Location” and then allow Linkedin to access my device’s location.
By default, the location on your profile will be suggested based on the postal code you provided in the past, either when you set up your profile or last edited your location. However, you can manually update the location on your LinkedIn profile to display a different location.
but it appears the manual method is disabled. I am guessing they have a fixed list of locations in my postcode and this can’t be changed.
So it appears that my options are to accept Linkedin’s crappy name for my location (other NZers have posted problems with their location naming) or to allow Linkedin to spy on my location, and it’ll probably still assign the same dumb name.
This basically appears to be a way for Linkedin to push users to enable location tracking, while at the same time forcing their own ideas of how New Zealand locations work onto users.
At the start of 2011 Uber was in one city (San Francisco). Just 3 years later it was in hundreds of cities worldwide including Auckland and Wellington. Dockless Electric Scooters took only a year from their first launch to reach New Zealand. In both cases the quick rollout in cities left the public, competitors and regulators scrambling to adapt.
Delivery Robots could be the next major wave to rollout worldwide and disrupt existing industries. Like driverless cars these are being worked on by several companies but unlike driverless cars they are delivering real packages for real users in several cities already.
Note: I plan to cover other aspects of Sidewalk Delivery Robots, including their impact on society, in a followup article.
What are Delivery Robots?
Delivery Robots are driverless vehicles/drones that cover the last mile. They are loaded with cargo and then travel to a final destination where they are unloaded by the customer.
Indoor Robots are designed to operate within a building. An example of these is the Keenon Peanut. These deliver items to guests in hotels or restaurants. They allow delivery companies to leave food and other items with the robot at the entrance/lobby of a building rather than going all the way to a customer’s room or apartment.
The next size up are sidewalk delivery robots, which I’ll be concentrating on in this article. Best known of these is Starship Technologies but there are also Kiwibot and Amazon Scout. These are designed to drive at slow speeds on footpaths rather than mix with cars and other vehicles on the road. They cross roads at standard crossings.
Finally some companies are rolling out car-sized Delivery Robots designed to drive on roads and mix with normal vehicles. The REV-1 from Refraction AI is at the smaller end, with company videos showing it using both car and bike lanes. Larger still is the small-car-sized Nuro.
Sidewalk Delivery Robots
I’ll concentrate most on Sidewalk Delivery Robots in this article because I believe they are the most mature and likeliest to have an effect on society in the short term (next 2-10 years).
In-building bots are a fairly niche product that most people won’t interact with regularly.
Flying Drones are close to working but it seems it will be some time before they can function safely and autonomously in a built-up environment. Cargo capacity is currently limited in most models and larger units will bring new problems.
Car (or motorbike) sized bots have the same problems as driverless cars. They have to drive fast and be fully autonomous in all sorts of road conditions. There is no time to ask for human help; a stopped vehicle on the road will at best block traffic and at worst be involved in an accident. These stringent requirements mean widespread deployment is probably at least 10 years away.
Sidewalk bots are much further along in their evolution and they have simpler problems to solve.
A small vehicle that can carry a takeaway or small grocery order is buildable using today’s technology and not too expensive.
Footpaths exist in most places they need to go.
Walking pace ( up to 6km/h ) is fast enough to be good enough even for hot food.
Ubiquitous wireless connectivity enables the robots to be controlled remotely if they cannot handle a situation automatically.
Everything unfolds slower on the sidewalk. If a sidewalk bot encounters a problem it can just slow to a stop and wait for remote help. If that process takes 20 seconds then it is usually no problem.
Starship Technologies
Starship are the best known and most advanced vendor in the sector. They launched in 2015 and have a good publicity team.
The push into college campuses was unluckily timed, with many campuses being closed in 2020 due to Covid-19. Starship has increased delivery areas outside of campus in some places to try and compensate, and has seen a doubling of demand in Milton Keynes. However the company laid off some workers in March 2020.
Kiwibot
Kiwibot is one of the few other companies that has gone beyond the prototype stage to servicing actual customers. It is some way behind Starship with the robots being less autonomous and needing more onsite helpers.
Based in Colombia with a major deployment in Berkeley, California around the UCB campus area
Robots cost $US 3,500 each
Smaller than Starship with just 1 cubic foot of capacity. Range and speed reportedly lower
Guided by remote control using way-points by operators in Medellín, Colombia. Each operator can control up to 3 bots.
In July 2020 Amazon announced Scout rollouts in Atlanta, Georgia and Franklin, Tennessee, though deliveries will still “initially be accompanied by an Amazon Scout Ambassador”.
Other Companies
There are several other companies also working on Sidewalk Delivery Robots. The most advanced is restaurant delivery company Postmates (now owned by Uber), which has its own robot called Serve in early testing. There is video of it on the street.
Several other companies have also announced projects. None appear to be near rolling out to live customers though.
Business Model and Markets
At present Starship and Kiwi are mainly targeting the restaurant delivery market against established players such as Uber Eats. Reasons for going for this market include:
Established market, not something new
Short distances and small cargo
Customers unload the product quickly so there is no waiting around
Charges by existing players are quite high. Ballpark costs are $5 to the customer (plus a tip in some countries), with the restaurant also being charged 30% of the bill
Even with the high charges drivers end up making only around minimum wage.
The current business model is only just working. While customers find it convenient and the delivery cost reasonable, restaurants and drivers are struggling to make money.
Starship and Amazon are also targeting the general delivery market. This requires higher capacity and also customers may not be home when the vehicle arrives. However it may be the case that if vehicles are cheap enough they could just wait till the customer gets home.
Still more to cover
This article was just a quick introduction to the Sidewalk Delivery Robots out there. I hope to do a later post covering more, including what the technology will mean for the delivery industry, for other sidewalk users, and for society in general.