NZNOG2010 – Day 1 – Session 1

I attended the NZNOG 2010 conference in Hamilton. Notes are below.

Opening

  • Overview by Dean and Jonny on developments, especially about the trust

National Library Webharvest

  • 2nd Harvest planned in 2010
  • Harvest planned for April
  • Material from 1st harvest not yet online
  • Feedback requested on “Notification”, “Robots Policy” and “Location of Harvester”
  • Would like feedback on the options paper

WAND Group

  • PMTUD (Path MTU Discovery) in IPv6
  • Tested how well this is working
  • Sent ICMPv6 PTB (Packet Too Big) messages to hosts to see if the remote host changed behaviour in response (dropping from >1280-byte to 1280-byte packets)
  • Tested 1647 websites (working ones from Alexa top 1 Million sites)
  • Used scamper to test
  • 58% PMTU worked, 34% packets too small (might be working already, unsure)
  • 5% PMTU failed or no response
  • Working on protocols other than port 80
  • Multiple vantage points, other sources of addresses, web interface to tool
  • Conclusion – PMTUD mostly works – read RFC 4890 (firewall example below)
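The RFC 4890 pointer is about ICMPv6 filtering: PMTUD breaks when firewalls drop Packet Too Big messages, so RFC 4890 says to let them through. On a Linux ip6tables firewall that is a one-line rule (my illustration, not from the talk):

# allow ICMPv6 Packet Too Big through so PMTUD keeps working (RFC 4890)
ip6tables -A FORWARD -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT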

Anomaly detection in Networks – Andreas Loft

  • Doing this automatically is good
  • Several existing tools
  • Nothing very concrete

WAND AMP Project

  • Boxes (PCs) hosted by ISPs that sit around pinging each other
  • Good coverage of TelstraClear since ISPs use them as upstream, less so for Telecom
  • 1 ping/minute, 10-minute average posted
  • Cute interface to graphs
  • http://www.wand.net.nz -> click on “NZ AMP”
  • Still under development

Shane Hobson – Velocity – Fibre to the home/premises

  • “How to build a Fibre network with a sack full of Government cash”
  • Broadband Challenge Fund $25M
  • Hamilton had 5 companies with some Fibre – Formed Hamilton Fibre Networks Ltd
  • HFN got $3m grant from fund
  • HFN partnered with Velocity Networks
  • 50-60km of Cable around Hamilton
  • Sell layer-2 ethernet services (similar to citylink)
  • Govt Ultra Fast Broadband fund of $1.5 billion
  • Aim: Ultra Fast BB to 75% of NZers
  • 100% of NZers in 25 (or 33) largest towns and cities
  • BB today is 25Mbit on ADSL2 contended to perhaps 250kb/s
  • Ultra Fast is 100Mb/s+ (50Mb/s upstream) with zero contention on the access network
  • Huge amounts of bandwidth potentially (hundreds of GB/s just for, say, Hamilton)
  • ISPs need to decide: Buy Layer 2 or buy dark fibre?
  • ISPs: Different standards/services in different regions
  • ISPs: What content / services ?
  • ISPs: Peer at regional exchanges to reduce haul on national links?
  • ISPs: ISPANZ role?
  • ISPs: Caching, CDNs
  • ISPs: Zero rated “on net” traffic , Multicast IPTV, software updates
  • Right now the Hamilton provider is doing: 1/3 dark fibre, 1/3 L2 within companies, 1/3 to the Internet
  • Frustrating to watch City Council digging up ground and not putting down ducts or letting other people do it.
  • Some councils are better

LCA2010 – Day 4

I ended up staying up quite late on Wednesday night, so I was a little zonked out on Thursday morning.

Keynote – Glyn Moody

  • Interviewed people for “Rebel Code”, found free software people “very nice”, even compared to other people in the computer industry
  • arXiv.org was set up the week before the Linux kernel was first released (Aug 1991)
  • Overview of the Public Library of Science
  • Human Genome Project – DNA is inherently digital
  • Bermuda Principles – finished annotated sequences submitted to a public database
  • Jim Kent published and got the full human genome into the public domain a short time before Celera finished their work and could have patented everything.
  • Open data – often the data is not published, just the results – example of the recent climate data being released; not a big problem if it had already been public.
  • Open notebook – regular updates on progress
  • http://en.wikipedia.org/wiki/Open_Notebook_Science
  • History of sharing art – Project Gutenberg started 1971; 10 books by 1991, 1000 in 1997.
  • Various free licenses slightly incompatible, hard to convert between; took several goes to get licences correct
  • Wikipedia – an easy non-programmer example of sharing that people can understand – “open source is Wikipedia for code”
  • Open government is more “Shared Source Government” rather than “Open Source Government”
  • Global economic crisis – tragedy of the commons
  • At least the Financial crisis has some winners
  • Very anti the financial system; suggests more “open source” options and commons
  • “If you share stuff you are destroying property, you are taking jobs away from the poor people” – how the debate is being framed

It was noted by one person that this year’s keynotes are more “Freedom” and “High tech”.

Lindsay Holmwood – Flapjack and Monitoring

  • Check – unit test – good bad ugly
  • Monitoring system – monitors for failing checks
  • 3 questions for monitoring systems – next check? was check okay? who do we notify? Fetch, test, notify
  • fetch – lookup
  • test – execute , verify
  • notify – decide , callout
  • traditionally done in single process
  • but it’s an embarrassingly parallel problem
  • parts can be split: fetch+test, fetch+notify – pass id/command between them
  • precompile checks – so fetch is less expensive
  • transport between processes is the scheduler
  • no data collection when testing (graph separately)
  • scheduler – workqueue – filled by populator, assigns stuff to notifier and workers
  • Lots of workers can be created (to do test)
  • flapjack – in Ruby, talks the Nagios plugin format
  • beanstalk – asynchronous workqueue service – Ubuntu/Debian packages
  • beanstalk – producer puts jobs on beanstalk, consumer takes jobs off
  • uses named tubes (queues), multiple tubes per instance
  • flapjack-worker – started up by flapjack-worker-manager, which starts multiple copies on a machine; various control commands
  • worker is simple so scaling is linear; spread across multiple machines as required
  • flapjack-notifier – has a manager to start it.
  • notifier has recipients.conf file with list of people to notify
  • notifier.conf – config for various notifiers (MAIL, SMS)
  • APIs – notifiers, filters, systems
  • notifier API – who , when and how sort of stuff.
  • “How many here use Puppet?” – about a dozen – “How many use Chef?” – none – “That’s a shame” – “No it’s not”
  • persistence API – store stuff: MySQL, CouchDB, whatever; a standard way to store data.
  • filter API – parent checks hierarchy (so don’t check ports if host down)
  • flapjack-admin – pending – nodes, check templates, checks (check template + node), batches (group of checks)
  • 3 types of checks
  • Gauges – stuff within range – collectd (point flapjack at collectd output)
  • Behavioural tests – cucumber-nagios
  • Trending – Reconnoiter – growing area
  • collectd – gets stats from anything – Nagios bridge – collectd-nagios queries collectd data
  • collectd client – gathers data from node and sends to collectd server
  • collectd forwarding server – aggregates, filters and forwards
  • flapjack – currently gems, soon to be real packages
  • http://flapjack-project.com

Bob Edwards – Yubikey authentication in a mid-sized organisation

  • Reusable passwords are dead – hard to remember, something you know which can be shared, discovered, captured or guessed
  • Alternative – One time Passwords – doesn’t matter if captured.
  • examples – RSA keys, SMS based systems, Yubikey, 2 factor authentication
  • Created by Yubico in Sweden, open-source
  • Looks like a USB keyboard to the computer; generates a 44-character OTP each time the button is pressed. No batteries. First 12 characters are fixed for each key
  • $12 each in volume – $40 as a one-off
  • Based on secret AES 128-bit key
  • Yubico ships YubiKeys with pre-generated IDs and AES keys. They offer public authentication, but they know the secret 128-bit key, so you need to trust them
  • secret-id + session counter + timestamp + session use + random + CRC string created by the key, then encrypted and the public ID prepended.
  • Server decrypts, checks the checksums and makes sure the secret-id matches and the session counter and timestamp have incremented from previous values.
  • Unless you trust and always want to use Yubico’s servers you should reprogram your keys with your own keys and IDs. They can’t then be used against Yubico’s server.
  • Weaknesses – requires a computer with a USB port that accepts a USB keyboard – some bugs with 1st generation keys – unused generated keys remain live until the next valid key is used
  • You can run your own server fairly easily – ykaserver – various interfaces, PostgreSQL database for storage – can also call out to PAM for two-factor authentication
  • softykey – software YubiKey – can be used to generate one-time passwords for things without USB keyboard interfaces
  • Tested with ssh, VPNs, web logins – mostly using the PAM or LDAP method (see the PAM sketch below)
  • See Linux Journal and yubico.com
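The PAM method is typically a one-line change per service. A hypothetical sketch (option names vary between pam_yubico versions, so treat this as illustrative rather than exact):

# /etc/pam.d/sshd – require a YubiKey OTP as a second factor
auth required pam_yubico.so id=16 authfile=/etc/yubikey_mappings

Here the authfile points at a file mapping local usernames to the public IDs of their keys.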

vimperator – automatic launch prog for netbooks

Jan Schmidt – Towards GStreamer 1.0

  • History of dev, faster bits during hackfests, when switched to git etc
  • Overview of last year: switched to git, slowdown when people were busy, switched to binary registry
  • Support for various DVD playback functions, special subtitles etc.
  • I’m not really in this area so I was just listening to get an idea where things are going. A bit too much detail for me at times.

Adam Jackson – The rebirth of Xinerama

  • Once again this was a bit over my head. It does look like the X guys spend a lot of time fighting assumptions built into the protocol and code 10 years ago however.

Stewart Smith et al – Building a Database Kernel with Lego-like Parts (Drizzle)

  • What would you change about MySQL? – Modular architecture
  • Some crazy legacy stuff in the MySQL code – good opportunity to clean
  • Move a lot of code out of the core, especially optional parts – understandable, and to reduce load – don’t load what you don’t need
  • more code coverage with tests
  • plugin interfaces – protocols, replication , logging, etc
  • modular replication system
  • general refactoring of storage engines
  • “If part of API sucks then fix API rather than work around it”
  • New this week – rot13() powerful encryption
  • Authentication plugins – auth_pam , auth_http
  • Various Logging plugins – logging_query , logging_syslog
  • Drizzle community – all contributors equal – all project information public – no contributor license agreements – release early and often (~2 weeks) – 100+ contributors, 500+ on mailing list
  • Milestone releases
  • When production release? – waiting to solidify compatibility – sounds like a few months – reliable but still in flux
  • Packages to be pushed out to distros once things stabilise

Afterwards I had some dinner and went to the Professional Delegates networking session.


LCA2010 – Day 3

Wednesday is the first day of Linux.conf.au proper. I thought that today I’d just keep my notes in a blog post to prevent doubling up.

The keynote was Benjamin Mako Hill, who talked about various things; the most interesting bit was “antifeatures” – things like DRM, crippling of products, etc. The one of these I most hate right now is the way that cheap netbooks have fairly low specs (small resolutions, low RAM, slow CPUs) partially because they have to keep the spec below a certain value in order to qualify for the really cheap Windows license.

The Dreamwidth talk was quite interesting (although the speakers’ pre-rehearsed banter didn’t really work). Lots of practical examples, war stories and good sound advice.

Selena Deckelmann talked about how to choose an open source database. The quick answer is “what problem are you trying to solve?”. She did a survey of the 50-odd databases out there and got 25 replies. She also did her own research and comparisons, and classified DBs into several categories (which I won’t list) such as

  • General Model – Key-Value, OLTP.
  • Distribution model (replication, partitioning, sharding).
  • Memory vs disk (e.g. keeping everything in memory only, like memcached).
  • HA options, Node failover.
  • Code dev model – Core + modules, Monolithic, Infrastructure
  • Community dev model – Dictator, Feature driven, Small group, A mix

Results at http://ossdbsurvey.org

  • Databases implement each other’s protocols
  • Need verification that protocols are correctly implemented
  • Need tools/tests to check things like replication working
  • More connections between projects/people (e.g. the Java world is separate)

Ted Ts’o – Production-Ready filesystems

  • Hard to make robust. Many different workloads, lots of state, very parallel
  • Hard to balance getting it out with getting it stable enough to be fairly safe to use
  • 75-100 person-years for a filesystem to be production ready.
  • e.g. ZFS: around a dozen people, started 2001, announced 2005, shipped 2006, people confident with it around 2008-2009
  • Ext4 renamed from ext4dev at the end of 2008
  • Ext4 shipping in some community distributions, soon in some enterprise distributions, widespread adoption 12+ months later
  • Lots of bugfixes still in ext4, most not real-world and picked up by auto-tools or careful checks in weird conditions.
  • Ted: “my other preferred term for dbench is ‘random number generator’”
  • Paths like online resize and online defrag that are not regularly tested by users or testers are the source of many bugs.
  • Many bugs were in the recently added subsystems and features
  • Making a general-purpose file system takes longer and a lot more effort than you might expect. A labour of love, hard to justify from a business perspective.
  • Solid state drives with a “flash translation layer” in place are pretty much the same as spinning disks. Extra optimizations for disks don’t help, but they don’t hurt

Matthew Garrett on the Linux community

  • Started by listing things he’s not talked about
  • The Linux community is “Like the Koreas”
  • To be a member of the Linux community “you just have to care, just have to turn up”
  • As a community we are very hostile; it’s seen as okay to flame and it is still being rewarded
  • Should we stop just cause it’s a nice thing to do or because it’ll stop scaring people off?
  • The Ubuntu Code of Conduct has meant that users are considered part of the community more than in other distributions
  • Code of Conduct must be enforced or it’s useless
  • “We value code above all else… not a good thing” . We need people to feel that by using software they are part of something
  • Is the community entirely based on technical excellence, or does it encompass everybody who uses, cares about or contributes to projects?
  • Idea for a positive-examples wiki with pointers to COPs and best-practice examples
  • We have not gained the behaviour standards normally associated with grown-up communities

Sage Weil – ceph distributed file system

  • How is it different?
  • scalable to 1000s of nodes, grows from a few
  • reliable, HA, replicated data, fast recovery
  • snapshots, quota-like accounting
  • Motivation – avoid bottlenecks and symmetrical shared disks
  • avoid manual workload partitioning; p2p-like protocols, intelligent storage agents
  • POSIX file system, scalable metadata server
  • metadata (MDS) servers/clusters and object store boxes are separate
  • CRUSH hash function used to distribute objects across devices; keeps working as devices are added. Spread them out explicitly across infrastructure if required
  • fast (no lookups), reliable, stable
  • ceph object storage daemon on each node
  • talks to peers on other nodes: replicate data, detect failures, migrate data
  • hashing function means nodes don’t have to negotiate with each other; CRUSH says where data is going.
  • monitors storage nodes, moves data around, makes sure it’s in the right places and up to date; fixes if required.
  • raw storage API if you don’t need full filesystem fun (dirs etc)
  • proxy that emulates the S3 REST interface
  • metadata cluster – uses the object store for all long-term storage, needs memory and a fast network for performance
  • metadata streamed to a journal; large journal (100s of MB) flushed now and then
  • snapshotting on a per-directory basis via a simple mkdir
  • snapshot leverages btrfs copy-on-write storage layer
  • file system client is near-POSIX
  • kernel client, FUSE, Hadoop clients
  • stable but not production ready
  • client should be in mainline kernel soon
  • aim to work across multiple datacentres, over unreliable links
  • http://ceph.newdream.net/

Paul Fenwick – World’s Worst Inventions

Not really a technical talk, more a few stories about funny inventions. Quite amusing, but I’m not sure it fits in with the rest of the conference.


LCA2010 – Day 1

First real day of Linux.conf.au is always full of anticipation. I woke up a little early and nibbled a small breakfast as I walked from ustay to the venue. After the crap weather on the weekend things were starting to look a bit better.

The signup area at the venue was fairly quiet, with people being processed quickly and many having been signed up over the weekend.

First up was the Welcome talk, which had a few hitches. Due to illness it was being given by an understudy who was a little unpracticed with the delivery and had a problem when the overhead screen went blank for 5 minutes due to technical problems (not sure if it was the screen or the laptop’s fault). Highlights were a 42 Below ad for Wellington and everybody singing Happy Birthday to Rusty.

I spent the first couple of sessions at the Haechsen/LinuxChix Miniconf since most of the topics were interesting and for various reasons (mumble mumble) talk times between miniconfs were not sync’d so it was hard to move between them.

It looks like this year the video situation is fairly good. All Miniconfs and main sessions are both being streamed live (although in WMA format, which caused some comment) and being recorded for later download. Hopefully it’ll all work out.

Talks I attended:

  • Version control for mere mortals by Emma Jane Hogbin was a good intro to VCS and practices, including a bit aimed at sysadmins and content maintainers rather than just coders. She obviously likes Bazaar a lot more than git. Good intro, and once again I feel guilty about not using it more.
  • Happy Hackers == Happy Code by Sara Falamaki was an overview of what makes programmers happy. Mostly concentrating on tools but with some other bits and pieces mentioned. Great, especially the bit where Sara started throwing (often wildly) lollies to members of the audience who made good suggestions.
  • Through the Looking Glass by Elizabeth Garbee gave her perspective on using open source software at the high-school level. Interesting stuff on tools, how other teens viewed open source and programming, and the scary story about how her school had a rule that any student who brought a computer to school running Linux/Unix would be expelled!
  • Creating Beautiful Documentation from Lana Brindley covered some high-level bits of the process Red Hat uses to create documentation, as well as a bit of an overview of what technical writers do and why their jobs rock 🙂
  • Getting your feet wet from Angela Byron gave ways and advice for getting involved with open source projects (including the old “woman’s work” (my term, not hers) of documentation etc). Pretty good.
  • Code of our own from Liz Henry was the first feminist-orientated talk of the day. Lots of stories and advice for women in open source, as well as a few bits where she gave her low opinion of how well some ideas have worked in practice.

Overall fairly interesting sessions. I noticed that for most of the 2 sessions the majority of people in the room were male and quite a few of the audience questions/comments were from them. This didn’t really cause a problem for most talks, which were on general topics, but I noticed the “male perspective” was less useful/welcome for Liz Henry’s talk.

For lunch I wandered around a little bit and eventually found a place called “The Coffee Club” where I had a soy milkshake and a pesto bruschetta. Very nice.

For the last session I went to “The Business of Open Source” Miniconf and then “Libre Graphics”.

  • The 100 mile Client Roster from Emma Jane Hogbin was an interesting overview of the way her business and business model has evolved and where she thinks the next step is. Good talk and delivery although it’s a bit outside my area for me to give a good review of the content.
  • Building a service business using open source software by Cameron Beattie didn’t really appeal to me. The talk was a bit flat and the delivery lacked spark.
  • Cheap Gimmicks to Make your designs ‘New’ by Andy Fitzsimon suffered a bit from technical problems with the delivery, but it looked like there was a good talk in there somewhere that just required a bit more prep.
  • Dynamic PDF reports via XSL and Inkscape by Peter Lieverdink was cool but a little over my head.
  • Inkscape: My Cheerleading Adventures by Donna Benjamin was a little sparse even for a 5-minute talk

After the end of the day I went along to a Wikipedia Meetup at the Southern Cross Hotel. The Meetup was fairly small (just 3 other people) but interesting people and several hours of discussion. Some talk about a NZ Wikimedia Chapter and also helping with the Wikimedia stand at the LCA open day.

Last up I grabbed a coffee and cake at Midnight Espresso.

Overall not a bad day; tomorrow will be the Sysadmin Miniconf all day, with the Speakers Dinner in the evening.


Hole in nbr.co.nz paywall

Update: NBR have fixed the hole

It looks like the National Business Review has a hole in their paywall. I don’t know if this is an intentional hole, but as at the time I’m posting this it enables people to read articles that are “subscriber only content”.

A sample restricted article by Chris Keall: “Did Paul Reynolds collect millions for hitting squishy targets?”. If I browse to it via http://www.nbr.co.nz/article/did-paul-reynolds-collect-millions-hitting-squishy-targets-109070 I get an error message:

Blocked version of article

However if I take the article number (109070) and access it via the URL http://www.nbr.co.nz/print/109070 I can see the whole article content:

Visible version of article

I guess somebody made a little mistake with the way they set things up, or possibly this is designed to allow search engines like Google to still find and index NBR’s content.
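For what it’s worth, you don’t even need a browser; given the article number, the whole text comes straight back on the command line:

curl -s http://www.nbr.co.nz/print/109070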


Tech Updates, looking to the future

A few things I’ve been looking at or intending to look at over the next few months.

  • I bought a new computer a couple of weeks ago for home. The computer is intended to replace the house server; the main functions will be as a file server and host for virtual machines. The big change is that I’ll be switching from Xen to KVM as the virtualisation technology.
  • KVM + PXE + Kickstart + Ubuntu – I really want to build my virtual machines automatically and at the same time be using a more general machine-building method. This page on the Ubuntu site looks like it is a good start and I’ll blog a bit when I get it all done (rough sketch after this list).
  • I need to do some work on Mondo Rescue – I have a bug I reported that is supposed to be fixed, and I have to test it.
  • GlusterFS is a distributed network file system that looks really cool, I’m intending to play with this a bit.
  • Once again we’ve applied to do a Sysadmin Miniconf at the 2010 Linux.conf.au conference, and once again we hope to have a really good miniconf. However, no less than 32 miniconfs have applied for just 12 slots, so I’m not sure if we’ll get in. We were really popular last year, but personally I’ve no idea what our chances are this year. A bit down about the thought of not getting in, but I guess whatever happens will happen.
  • I keep getting good ideas for websites and products. Not programming much and having poor time control means most of these ideas are probably not going anywhere. Maybe I’ll try a couple of them though. I’ve also got some further ideas for technologies to play with, but I want to get the ones above sorted first.
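For the KVM + PXE item above, what I’m after is roughly a one-liner per VM, along these lines (an untested sketch – the name, disk path and bridge are made up for illustration):

# create a KVM guest that boots off the network for a kickstart install
virt-install --name=vm1 --ram=1024 \
    --disk path=/var/lib/libvirt/images/vm1.img,size=8 \
    --network bridge=br0 --pxe --noautoconsole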

Watching processes with monit

I’ve been having a small problem on one of my servers with the HTTP daemon dying every week or two. It’s not often enough to be a huge problem or to invest a lot of time in, but enough of a nuisance to require a fix. So what I ended up doing was installing monit to look after things.

monit is a simple daemon that checks on server resources (mainly services and daemons, but also disk space and load) every few minutes and sends an alert and/or restarts the service if there is a problem. So after installing the package (apt-get install monit) I just created a series of rules like:

check process exim4 with pidfile /var/run/exim4/exim.pid
   start program = "/etc/init.d/exim4 start"
   stop program = "/etc/init.d/exim4 stop"
   if failed host 127.0.0.1 port 25 protocol smtp then alert
   if 5 restarts within 5 cycles then timeout

check process popa3d with pidfile /var/run/popa3d.pid
   start program  "/etc/init.d/popa3d start"
   stop program  "/etc/init.d/popa3d stop"
   if failed port 110 protocol pop then restart
   if 5 restarts within 5 cycles then timeout

for the main processes on the machine. Sample rules are available in the config file and documentation, and Google is fairly safe as long as you make sure you don’t copy a 10th-generation rule off a “Ruby on Rails” site (RoR components apparently require frequent restarts). All up, the whole install and configuration took me around half an hour and I’m now monitoring:

# monit summary

System 'crimson.usenet.net.nz'      running
Process 'lighttpd'                  running
Process 'sshd'                      running
Process 'named'                     running
Process 'exim4'                     running
Process 'popa3d'                    running
Process 'mysql'                     running
Process 'mailman'                   running
Device 'rootfs'                     accessible
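The disk-space and load checks mentioned above are just as short. From memory it is something like this (my monit version uses “check device”; newer ones call it “check filesystem”, so check the docs for your version):

check system crimson.usenet.net.nz
   if loadavg (5min) > 4 then alert
   if memory usage > 75% then alert

check device rootfs with path /
   if space usage > 90% then alert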

Hacking InternetNZ Council Vote

InternetNZ is the main New Zealand Internet lobby and policy organisation. More or less, they take money from .nz fees and redirect it to benefit the New Zealand Internet and Internet users.

In a few days there is an election for its president and council. Following a post by Andy Linton to the NZNOG mailing list about the “need for a strong voice from the technical community”, several technical people have put their names forward for council.

Following a discussion on the InternetNZ mailing list I realised that many people are unsure of the best way to rank a list of candidates to ensure the “best” result. Looking around I was unable to find a good reference for this online, so I thought I’d write a quick post here. I should give the disclaimer that I’m not an expert in this area, so possibly I’ve made an error. I’m also only addressing the Council vote, not the President and Vice-President votes.

Voting System

The voting system for InternetNZ is outlined here, but what it simply means for the voter is that they rank the candidates from 1st to last. For each council seat the lowest-polling candidates are eliminated and their votes allocated to the next preference until one candidate has an absolute majority. For the next council seat it happens again, except the ballots that had the previous round’s winner as first preference are eliminated from any further consideration.

You can see what happened last year here. There were 9 candidates, 6 seats and 90 voters. Rounds 1 through 7 show people being eliminated and their votes transferred around until Jamie Baddeley is elected. In Round 9 it starts again, but 16 votes have been removed from the pool; these are the people who voted for Jamie as their first preference.

In rounds 9 through 15 the eliminations continue until Michael Wallmannsberger is elected. Then his 16 first-preference votes are removed and it starts again, until all 6 candidates are elected. The 2006 result is also online.

The interesting thing to notice is that only ballots that put an elected candidate as the 1st preference are removed from later rounds. So while the 16 people who voted for Jamie Baddeley helped elect him in the first round, they had no influence in later rounds. On the other hand, the 22 people who put Neal James, Carl Penwarden, Sam Sargent and Michael Payne as their first preference got to participate in all 6 rounds of the election.

So what is the trick?

So out of the candidates I would characterise the following people as technical: Lenz Gschwendtner, Glen Eustace, Stewart Fleming, Andrew McMillan, Dudley Harris, Gerard Creamer, Nathan Torkington and Hamish MacEwan. This is eleven out of the 17 candidates running for the four council seats.

Now assuming that there is some level of support for technical candidates, the worst case would be that all “technical” voters put, say, Nathan Torkington (to pick a well-known name) as first preference. Nathan is elected as the first candidate and then the technical voters have no further influence on the other 3 councillors.

Instead we want to make sure that techie votes elect as many candidates as possible.

So what should I do?

Note: I am using the term “round” below to refer to each council seat election (6 in 2008, 4 in 2009)

If you have a group of voters and a group of candidates you have two main objectives:

  1. Avoid giving a first preference to a candidate that will be elected in the early rounds so your ballot will participate in as many rounds as possible.
  2. Give enough first preferences to your candidates to ensure they are not eliminated early in each round

The first idea is easy: don’t give your first preference to a popular technical candidate. However, this is where the second objective comes in; you need to give each candidate enough first-preference votes so that they are not eliminated early in every round.

I think the following should work:

  1. Rank all the candidates in your order of preference
  2. Decide how far down the list you are “happy” with the candidates (i.e. the 11 techies listed above)
  3. Randomly (yes, really randomly) pick one of the acceptable people and put them as your first preference.

The idea now is that if, say, we have 40 technical people voting, then each of the 11 technical candidates will end up with at least 3 or 4 first-preference votes. As the lowest ranked of these are eliminated, their preferences will flow to the other technical candidates (in order of most popular). If a technical candidate is elected, only around 1/10 of the technical ballots will be eliminated from later rounds, so there is still a good chance of electing other candidates. The random pick itself is a one-liner (sketch below).
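If you keep your ranked list in a text file, step 3 looks something like this (a toy sketch; candidates.txt is a hypothetical file with one acceptable candidate per line, in your normal order of preference):

first=$(shuf -n 1 candidates.txt)   # a genuinely random first preference
echo "$first"
grep -vx "$first" candidates.txt    # everyone else, still in your order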

What could go wrong?

It’s possible that the random allocation of first preferences will result in a popular candidate (e.g. Nathan Torkington) randomly getting a smaller number of first preferences and being eliminated early in every round. I think this is a small risk since

  1. it is likely that popular candidates will get first preferences from other voters
  2. popular candidates will have a higher random chance of being put as first preference, since they will be in the “acceptable” list of more techie voters
  3. even if this does happen, others in the slate will still get in.

Feel free to let me know any questions (or point out horrible errors I’ve made)


What I want in a netbook for 2010

A recent thread about laptops in the NZLUG list reminded me how I’m not 100% happy with the way netbooks are evolving. The problem is that when the EEE came out the idea was that you’d buy a cheap, portable PC which would do 90% of what people use PCs for (email, browsing, simple documents, simple video and audio).

However, the portability and cheapness seem to be going out the window as “netbooks” now cost as much as low-end laptops and are getting almost as big. So the big advantages of my existing EEE:

  • Small and light enough to carry in my bag all the time and not notice.
  • Cheap enough that I can not use it for 2 months and not feel like I’ve wasted money
  • Cheap enough that people can give one to their kids and not worry about the kid breaking it.
  • Solid state so I don’t worry about dropping it.

are sort of lost with the new netbooks. Remember how the original EEE (nearly 2 years ago) was supposed to cost just $US199? That is the sort of price we need so people can buy them as “kids’ toys”, “play machines”, “travel kit machines”, etc.

I’m intending to buy a replacement for my EEE in 2010 (3-year replacement cycle); what I’d really like to get would be:

  • Case the same size as EEE70x or EEE90x series
  • Display 1024×768
  • 1GB RAM (upgradeable would be nice)
  • 8 or 16GB built in flash drive
  • CPU fast enough to play video on full screen
  • Ports: 3xUSB, Ethernet, WiFi, SD slot, VGA, sound/mic, camera
  • 6+ hours battery
  • Ubuntu standard
  • no more than $US 300

I think having a standard Linux OS (I like Ubuntu, but that’s me) that netbook makers can just install on their machines, or that targets a netbook platform, would be a big win. Even better if it’s a “full status” version of Ubuntu that gets updates every 6 months, or best of all it would be “standard” Ubuntu and would “just work” on a smaller machine.

I’m hoping the 3rd (4th?) generation netbooks can be what I want. The 1st generation was just getting something out there (EEE 701), the second was upping the spec as people demanded more, while I hope with the 3rd that the performance is now “good enough” and the cost and size can be shrunk back down again.


A week of Twitter

So about a week ago I signed up for a Twitter account and started micro-blogging. I’m on 89 updates, which is around a dozen a day, although this week things are busy with the Blackout Campaign against Section 92A of the new Copyright Act, so in a typical week there will probably be fewer (especially when the novelty wears off). If possible I’m trying to make tweets that might be of interest to other people, especially things like links to good articles, which in the past I sometimes posted to the main blog.

Following people is interesting. For now I just look at the last page that somebody has posted, and if it looks interesting (on average) I’ll add them. So I’m following around 50 feeds so far and I’ll see how it goes, but since on average the impact of each feed is less than an RSS feed (i.e. I’ll usually not scroll back to stuff I missed overnight) I’m not overwhelmed yet.

So far I am using the web interface a bit (which is good for looking at people, their followers and history), Twitux at home and TwitterFox at work. If you go to my actual blog website you’ll see I’ve added an RSS feed of my tweets (it only updates every hour or so) and I added the Twitme WordPress plugin, so every time I post to my blog a tweet is sent.

Earlier today I was inspired by the nzpolice feeds that Sam Sargeant created and decided to create my own. So I’ve made the nz_quake twitter bot, which updates whenever a new earthquake is reported on the GeoNet website. The actual bot is just a shell script that checks every few minutes to see if the status page has changed; if it points to a new earthquake (they have unique IDs) it uses curl to post to Twitter’s simple web interface.

It took me around an hour of playing around to implement in around 35 lines of shell script. I’ll have to wait a few days to see how well it works, since the webpage only reports the earthquakes that people might have felt rather than every tiny little one.
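Condensed down a long way, the script is basically this shape (a sketch rather than the real thing – the GeoNet URL and ID pattern are stand-ins, and it assumes Twitter’s basic-auth statuses/update API):

#!/bin/sh
# tweet when GeoNet reports a new quake (condensed sketch)
page=$(curl -s http://www.geonet.org.nz/quakes.html)   # stand-in URL
id=$(echo "$page" | grep -o 'quake-[0-9]*' | head -1)  # stand-in ID pattern
if [ -n "$id" ] && [ "$id" != "$(cat ~/.nz_quake_last 2>/dev/null)" ]; then
    echo "$id" > ~/.nz_quake_last
    curl -s -u "nz_quake:$PASSWORD" \
        -d "status=New earthquake reported: http://www.geonet.org.nz/$id" \
        http://twitter.com/statuses/update.xml > /dev/null
fi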
