I skipped a couple of talks to do the hallway track and other stuff
Koha – not your average library system by Aleisha Amohia
Named “Koha” because the software was made open source as a gift to the community (koha is the Māori word for a gift)
Started in 1999
First fully web-based open source library system
Bug reports and external patches arrived soon after
Customizable and Configurable
Used in 18,000+ libraries
It is just a big database
Can be used as more than just a library system
Can be used to catalogue other things at organisations other than libraries, e.g. documents
Configurable via CSS, fonts, languages, CMS, feature toggles, etc
Customisable views for each branch are possible
Special features beyond the code
Offline circulation
Supports non-ASCII characters
Translation capability
Is it harder to find people to work on it since it is written in Perl, which is effectively a legacy language? – It has good onboarding and support for devs, and things still work
What are the challenges with it being open source?
People worry about the quality of OSS. Fix: have good, robust quality procedures
People think it is free. Fix: have good support that is worth paying for
DB backend – MySQL and MariaDB
The circle of life: The Digital Skills GitBook project by Sara King
Has been working for the last 5 years on a project that is now in the process of winding up
Developing in the open, building a product with our users by Toby Bellwood
The Lagoon Story
At amazee.io, where he is the Lagoon Lead
What is Lagoon
Takes an application to Kubernetes (does the Docker build for the customer, converts it to k8s)
Docker based
Based on git workflows. Mostly Drupal, WordPress, PHP and NodeJS apps
Presets for the extra stuff like monitoring etc
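To make that concrete: a project opts in to Lagoon with a small .lagoon.yml file at the repo root that maps git branches to environments. The sketch below is illustrative only, based on the public docs; the branch name and route are placeholders and the exact keys can differ between Lagoon versions.

```yaml
# .lagoon.yml (illustrative sketch)
docker-compose-yaml: docker-compose.yml   # services/types come from compose labels

environments:
  main:                     # one environment per git branch
    routes:
      - nginx:
          - "www.example.com"
```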
Why
Because developers are too busy to do all that extra stuff
and Ops would prefer it was all automated away (the right way)
8 full-time team members
Knows a lot about the application, not so much about the users (apart from Amazee.io)
Users: Hosting providers, Agencies, Developers
The Adopter: Someone using it for something else, weird use cases
Agencies: Need things to go out quickly, want automation, like documentation to be good. Often need unusual technologies because their customers want them.
Developers: Just want it stable. Only worried about one project at a time. Often open-source minded
User Mindset
Building own tools using application
Do walking tours of the system as recorded Zoom sessions
Use developer tools
Discord, Slack, Office Hours, Events, Easy Access to the team
Balance priorities
e.g. stuff customers will use even though Amazee won’t use it
Engaging Upstream
Try to be a good participant – what they would want their own customers to be
Encourage our teams to “contribute first”. Usually works well
Empowering the Team
Contribute under your own name
Participate in communities
How to stay Open Source forever?
Widening the Core Contributor Group
Learn from others in the Community. But most companies are not open sourcing the main component of their business.
Applied unsuccessfully to become a CNCF Sandbox project
Presenting n3n – A simple Peer to Peer VPN by Hamish Coleman
How does it compare to other VPNs?
Peer to peer
NAT piercing
Not all packets need to go via the server
Distributed ethernet switch – gives extra features
Userspace except for the tun/tap driver, which is pretty common anyway
Low deployment requirements, easy to install in multiple environments
Relatively simple security, not super secure
History
Based off n2n (developed by the people who did ntop)
But they changed the license in October 2023
Decided to fork into a new project
First release of n3n in April 2024
Big change was they introduced a CLA (contributor license agreement)
CLAs have problems
Legal document
Needs your real name, contributor hostile, asymmetry of power
Can lead to surprise relicensing
Alternatives to a CLA
Preserving Git history
Developer’s Certificate of Origin
Or it could be a CLA
Handling Changes
Don’t surprise your Volunteers
Don’t ignore your Volunteers
Do discuss with your Volunteers and bring them along
Alternatives
WireGuard – No NAT piercing
OpenVPN – Mostly client-to-server. Also too configurable
Why prefer
A single, simple access method (the speaker uses 4 different operating systems)
p2p avoids latency because local instances talk to each other directly
Goals
Protocol compatibility with n2n
Don’t break user visible APIs
Incrementally clean and improve codebase
How it works now
Supernode – Central co-ordination point, public IP, Some access control, Last-resort for packet forwarding
Communities – Nodes join, form a virtual segment
IP addresses
Can just run a DHCP server inside the network
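To make that concrete, here is roughly what a minimal setup looks like using the n2n-era commands that n3n inherited (the community name, key, addresses and hostname are placeholders, and n3n has since moved towards INI-file configuration, so treat this as a sketch rather than exact current syntax):

```sh
# on a machine with a public IP: run the supernode (co-ordination point)
supernode -l 7654

# on each member machine: join community "mynet" with a shared key,
# taking a static address inside the virtual segment
edge -c mynet -k sekrit -a 10.86.0.1 -l supernode.example.com:7654
```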
Design
Tries to create a full mesh of nodes
Multiple Supernodes for metadata
Added a few features and changes compared to n2n
INI file, Help text, Tidied up the CLI options and reduced options
Tried to make the defaults work better
Built in web server
Status page, jsonRPC, Socket interfaces, Monitoring/Stats
Current State of fork
Still young. Has gained another contributor
Only soft-announced so far. Growing base of awareness
Plans
IPv6
Optimise encryption/compression
Improve packaging and submit to distros
Test coverage
Better NAT piercing
Continue improving the config experience
Selectable tuntap drivers
Mobile phone support hoped for but probably some distance away
Speaker’s uses for software
Manage his mother’s computer
Management interface for various servers around the world
From the stone age to silicon: The Dwarf Axe guide to the evolution of technology by Steven Ellis
What is a “Dwarf Axe” ?
Snowflakes vs Dwarf Axes
It’s an Axe that is handed down and consistently delivers a service
Both the head (software) and the handle (hardware) are upgraded separately, and both must be maintained. It is treated as the same platform even though it is quite different from what it was originally, but it still delivers the same services
Keeps delivering fairly similar services. Same box on an organisation diagram
Home IT
Phones handed down to family members. Often not getting security patches anymore
Enterprise IT
Systems kept long past their expected lifetime
Maintained via virtualisation
What is wrong with a Big Axe?
Too Big to Fail
Billion dollar projects fail.
Alternatives
Virtual Machines – Running on an Axe somewhere
Containers – Something big to orchestrate the containers
Microservices – Also needs orchestration
Redesign the Axe
The cloud – It’s just someone else’s Axe
Options
Everything as a service. 3rd party services
Re-use has an end-of-life
Modern hardware should have better (and longer) support
Ephemeral Abstraction
Run anywhere
Scale out not up
Avoid single points of failure
Focus on the service (not the infra or the platform)
I am in the middle of upgrading my home monitoring setup. I collect metrics via Prometheus and query them with Grafana. More details later, but yesterday I ran into a little problem that crashed one of my computers.
Part of the Prometheus ecosystem is node_exporter. This is a program that runs on every computer and exposes CPU, RAM, disk, network and other stats of the local machine for Prometheus to collect.
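For context, Prometheus pulls the stats rather than node_exporter pushing them: each machine runs node_exporter (port 9100 by default) and a scrape job in prometheus.yml lists the targets. A minimal fragment, with placeholder hostnames:

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["server1:9100", "microserver:9100"]
```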
One of my servers is a little HP Microserver Gen7 that I bought in late 2014 and installed CentOS 7 on. It has a boot drive plus 4 hard drives with data on them.
I noticed this machine wasn’t showing up in the Prometheus stats correctly. I logged in and found that the version of node_exporter was very old and was formatting its data in an obsolete way. So I downloaded the latest version, copied it over the existing binary and restarted the service…
…and my server promptly crashes. So I reboot the server and it crashes a few seconds after the kernel starts.
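For the record, the upgrade step that triggered all this was nothing fancy, roughly the following (the version number and paths are illustrative, and I'm assuming a systemd service named node_exporter):

```sh
# grab an official release build and unpack it
wget https://github.com/prometheus/node_exporter/releases/download/v1.7.0/node_exporter-1.7.0.linux-amd64.tar.gz
tar xzf node_exporter-1.7.0.linux-amd64.tar.gz

# overwrite the old binary and restart the service
cp node_exporter-1.7.0.linux-amd64/node_exporter /usr/local/bin/node_exporter
systemctl restart node_exporter
```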
Obviously the problem is with the new version of node_exporter. However node_exporter is set to start immediately after boot, so what I have to do is start Linux in “single user mode” (which doesn’t run any services), edit the file that starts node_exporter, and then reboot again to get the server up normally without it. I follow this guide for getting into single user mode.
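The recovery dance, roughly (the exact kernel parameter depends on your distro and the guide you follow; I'm assuming node_exporter runs as a systemd unit of that name):

```sh
# at the GRUB menu, edit the kernel line and append one of:
#   single                       (classic single user mode)
#   systemd.unit=rescue.target   (systemd equivalent)
# then boot, and from the resulting root shell:

systemctl disable node_exporter   # stop it starting at boot
reboot                            # come back up normally, without it
```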
After a bit of googling I came across node_exporter bug 903 (“node_exporter creating ACPI Error with Kernel error log”), which seems similar to what I was seeing. The main difference is that my machine crashed rather than just giving an error. I put that down to my machine running fairly old hardware, firmware and operating system.
The problem seems to be a bug in HP’s hardware/firmware around some stats that the hardware exports. Since node_exporter tries to get lots of stats from the hardware, including temperature, CPU, clock and power usage, it hits one of the dodgy interfaces and causes a crash.
The bug report suggests disabling the “hwmon” collector in node_exporter. I tried this but I was still getting a slightly different crash that looked related to the clock or CPU frequency. Rather than trying to trace it further, I disabled all the collectors and then enabled the ones I needed one by one until the stats I wanted were populated (except for uptime, because it turned out the time stats via --collector.time were one thing that killed it).
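The end result was an invocation along these lines. This is a sketch rather than my exact flag list, and the set of available collectors varies by node_exporter version (node_exporter --help lists them):

```sh
# disable everything, then opt back in collector by collector
node_exporter \
    --collector.disable-defaults \
    --collector.cpu \
    --collector.meminfo \
    --collector.filesystem \
    --collector.netdev
# deliberately absent: --collector.hwmon and --collector.time,
# the two implicated in the crashes on this machine
```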