DevOpsDownUnder – Day 2 – Load testing

Notes from the load testing session on Sunday.

Lonely Planet

  • Need tests developed as features are added
  • Moved tests to JMeter
  • Anyone can run and develop tests (previously just one test guy who had to write and run tests for 3 agile teams)
  • Load tests are integrated and run as part of continuous integration (a minimal sketch of this kind of check follows)
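
Not from the session, but a minimal sketch of the kind of latency check that could run in CI, assuming a hypothetical app exposing a /health endpoint on localhost; a real suite at this scale would be the JMeter tests described above:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Arrays;

    // Minimal CI load smoke test: hit an endpoint N times and fail the
    // build if the high-percentile latency exceeds a budget. The URL
    // and the 500ms budget are hypothetical placeholders.
    public class LoadSmokeTest {
        public static void main(String[] args) throws Exception {
            int n = 50;
            long[] millis = new long[n];
            for (int i = 0; i < n; i++) {
                long start = System.nanoTime();
                HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8080/health").openConnection();
                conn.getResponseCode();   // force the request to complete
                conn.disconnect();
                millis[i] = (System.nanoTime() - start) / 1_000_000;
            }
            Arrays.sort(millis);
            long p95 = millis[(int) (n * 0.95) - 1]; // roughly the 95th percentile
            System.out.println("p95 latency: " + p95 + " ms");
            if (p95 > 500) {
                System.exit(1);           // non-zero exit fails the CI job
            }
        }
    }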

Atlassian

  • Test important paths and functions daily
  • Emails the test person and the author of a recent commit if things (usually load times) get worse
  • Graphs of page load speeds

Demo of Cloudtest by one of the participants. Looks amazing (although quite expensive, I guess).

Suggested vendors / Products

  • neotest
  • cloudtest
  • siege
  • BrowserMob (uses Selenium)
  • New Relic

DevOpsDownUnder – Day 2 – Flex your test

Tim Moore – Atlassian

What do we do wrong now?

  • Continuous tests? Against the production environment?

Writing flexible tests

  • Current tests make assumptions about the environment (e.g. that they can recreate the DB)
  • Sample app (a Twitter clone called Tooter), in Java, built with Maven and deployed to Google App Engine
  • http://github.com/TimMoore/tooter , tooterapp.appspot.com
  • Tests use HtmlUnit to run against the actual website
  • Logs in and checks elements on the page. Tests start from logging in; make sure simple login errors are picked up and the right error pages are produced
  • First refactor was to generalise the tests to be able to reuse things like “log in to the site as user XXX, password YYY” (see the sketch after this list)
  • Second refactor puts everything in separate classes. Low-level things like login, host, and the HTML library all have their own class. Easy to change
  • Keep data in the production database. Have test users or organisations to test against. Perhaps don’t confirm user signups so new users aren’t created on each test run. Create “test groups” that test users are members of so changes only affect those test accounts; leverage A/B testing infrastructure
  • Other tools include Cucumber
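
As a concrete illustration of those two refactors, here is a minimal sketch of a reusable login helper built on HtmlUnit. The form and field names are my assumptions, not taken from Tooter's actual markup:

    import com.gargoylesoftware.htmlunit.WebClient;
    import com.gargoylesoftware.htmlunit.html.HtmlForm;
    import com.gargoylesoftware.htmlunit.html.HtmlPage;

    // Low-level helper in its own class, in the spirit of the second
    // refactor above. Form and field names are hypothetical; check the
    // real markup of the app under test.
    public class SiteSession {
        private final WebClient client = new WebClient();
        private final String baseUrl;

        public SiteSession(String baseUrl) {
            this.baseUrl = baseUrl;
        }

        // "Log in to the site as user XXX, password YYY"
        public HtmlPage loginAs(String user, String password) throws Exception {
            HtmlPage page = client.getPage(baseUrl + "/login");
            HtmlForm form = page.getFormByName("login");
            form.getInputByName("username").setValueAttribute(user);
            form.getInputByName("password").setValueAttribute(password);
            return form.getInputByName("submit").click();
        }
    }

A test can then assert on the returned page, e.g. that a bad password lands on the right error page.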

DevOpsDownUnder – Day 2 – Smart Grid

Joel Courtney from Energy Australia (covers parts of NSW)

  • Fundamental business of power hasn’t changed in 100 years, compared to technology and telecommunications
  • Very high reliability requirements
  • 60-80 year old infrastructure in places
  • Instrumentation is limited in how close it gets to customers. Only to distribution stations (30k of these), sometimes substations
  • Rolling out 4G and other wireless networks – priority is coverage not speed
  • IP over power is not a viable solution
  • Need to do something with all this data

System

  • Need to integrate GIS and other information about state of network to make sense of raw numbers and how they relate to each other
  • Decided to go with a web-based system since clients had at least used the web. Browsers on desktops etc
  • Web based systems more mature these days
  • Lack of expertise in IT space
  • Using GIS info and maps-based apps to plan rollouts (reduce travel distances)
  • Some inspiration from flickr (URL structure, deep linking, simple relationships between assets)
  • Everyblock (integrating and aggregating information on a geographical basis)
  • Twitter (asset reporting of events by timestamp; everything else sits on top of that)
  • tumblr (similar to twitter: move through streams of events)
  • Google Maps (simple integration, visualisation, assets in the real world)

ION

  • Application for monitoring
  • 750 sub-stations currently
  • Previous systems used polling (didn’t scale)
  • Push messaging instead (see the sketch after this list)
  • Operators at this point able to get better locations of faults than in current systems
  • Huge increase in information being collected (every minute vs once every few months in some cases)
  • Able to do an overlay of assets on Google Maps (putting 30k on one map pushes the Google API). Percentage utilisation
  • In some cases substations in remote areas only have a couple of customers. Not worth deploying to, especially since next-generation metering will provide the information
  • Looking at future integration once smart meters are rolled out. But access to that data is still to be decided by the power regulator. Strong privacy concerns.
  • Mostly event-related browsing now
  • Shorter term: pushing up into the transmission space
  • Interested in releasing more information to the public. Need input on formats and interaction.
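
Not from the talk, but a minimal sketch of the push pattern: each substation POSTs its reading to a central collector once a minute, instead of the collector polling 750+ sites on a schedule. The endpoint and payload format are hypothetical:

    import com.sun.net.httpserver.HttpServer;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Sketch of push-style collection: substations push readings to a
    // central collector, so there is no poll schedule to scale up.
    public class ReadingCollector {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/readings", exchange -> {
                try (InputStream body = exchange.getRequestBody()) {
                    String reading = new String(body.readAllBytes(), StandardCharsets.UTF_8);
                    System.out.println("substation pushed: " + reading);
                }
                exchange.sendResponseHeaders(204, -1); // no response body needed
                exchange.close();
            });
            server.start(); // work scales with incoming pushes, not with site count
        }
    }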

DevOpsDownUnder – Day 2 – John Ferlio

John Ferlio – Commit Early, Deploy Often

Flickr claim to deploy multiple times per day; that’s not for everybody.

Various ways to deploy: tarballs, version control, packages.

Using package management to deploy

  • Treat internal apps same as external apps
  • Works with config management
  • Use CM and VMs to give devs a sample of production on their laptop
  • Devs can then develop directly to what prod looks like

Example deploy for Ruby apps

  • Use Bundler to set up Ruby dependencies for the app
  • Simple Makefile to install Bundler and run it. Bundler installs the gems and then compiles them. Then files are put in the right dirs (prod and staging versions)
  • dh_make to create the base deb packaging environment
  • Delete unwanted extra Debian template files
  • Create a staging and a production package; basic fill-in of the Debian control files to build the packages
  • Package contains the whole website and files. Maybe 200MB. More advanced users might want to split some stuff off.
  • He has a little bit of scripting to put the bzr revision in a file in the package for later reference.

Launchpad PPAs let you build your source packages for various Debian distributions/architectures. He blogged about a simpler system he created (which you can host locally) a month or two back.

Used to deploy Ruby on Rails and WordPress apps.

  • Don’t have to compile things on box, don’t have to install gems, don’t have to svn update.
  • Maybe use a pre-init script to stop the app server during the upgrade and restart it afterwards
  • Some talk about how to sync deployment of an updated app and the new database schema it requires. Thoughts seem to be that the new app version should be able to handle the old schema during the transition (a sketch of that idea follows this list)
  • Rails libraries evolve so fast, which is why libraries need to be bundled. Other languages can sometimes use OS-provided libraries
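
To make that transition point concrete, here is an illustrative sketch (in Java/JDBC rather than Rails, purely for illustration) of app code that tolerates both the old and the new schema while a migration is in flight. The table and column names are hypothetical:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.ResultSet;

    // New app version that still works against the old schema: it checks
    // whether the migrated column exists and falls back if it doesn't.
    // Note that table/column name case sensitivity varies by database.
    public class SchemaTolerantReader {
        public static String displayName(Connection conn, long userId) throws Exception {
            DatabaseMetaData meta = conn.getMetaData();
            boolean hasDisplayName;
            try (ResultSet cols = meta.getColumns(null, null, "users", "display_name")) {
                hasDisplayName = cols.next(); // column exists only after the migration
            }
            String sql = hasDisplayName
                ? "SELECT display_name FROM users WHERE id = ?"
                : "SELECT username FROM users WHERE id = ?"; // old-schema fallback
            try (var stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, userId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }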

NZ National Party claims copyright of Diplomat’s photos?

Last night I posted about how the copyright of photographs of New Zealand Prime Minister John Key is owned by the New Zealand National Party. In the post I assumed that the photos being claimed by National were taken by political staffers in the Prime Minister’s office.

However, looking through the New Zealand Herald’s photo gallery of John Key’s recent visit to Washington, I noticed that photos 5 and 6 in the gallery match these two photos on the “National Party” flickr page.

However the photos in the Herald are credited to “Tania Garry”, who appears (from a bit of googling) to be a career New Zealand diplomat currently posted to Washington.

So what is happening here? A New Zealand diplomat takes a photograph which is released to some news media (with full rights to publish it commercially, I assume), but somehow the National Party is allowed to put it on their website/flickr under their own conditions?

So who owns the copyright for this photo? Who released a copy, under what conditions, and to whom?

From my point of view it appears that a NZ diplomat takes a photo which is then made available to “friendly” news media for publication (but not to the New Zealand public) before copyright is claimed by the National Party?

Like I said in my previous post, photos and other material like this should not be claimed by political parties but should be released under a liberal license for use by anyone (which includes commercial use like newspapers).


Who owns John Key’s history?

The current Prime Minister of New Zealand is John Key; he’s a nice guy (well, most people say so) who leads the right-of-centre National Party in parliament.

As a 21st-century politician he has staff members who look after a Twitter feed, a video blog on YouTube, and photos from his activities that go on Flickr.

The problem is that the copyright of images and videos of John Key taken in the course of his official duties doesn’t appear to belong to the country, or even to be released into the public domain, but is in fact claimed by the National Party.

Presumably this claim comes about because the people recording the material are politically appointed staffers (although their salary is paid by the New Zealand taxpayer) who have in turn given their copyright to the National Party (hopefully this is a formal arrangement and not some ad-hoc thing).

The problem is that while some photos and other material are posted to Flickr under a restrictive license (which, I’ll admit, is more than previous PMs appear to have done), ownership and control of the material resides with a political party rather than the public.

So while the White House Flickr stream allows photographs to be downloaded, reprinted and used in websites and media, a photograph of John Key meeting the US Agriculture Secretary is locked down to Attribution-Noncommercial-No Derivative Works. Even worse, attempts to contact the National Party Flickr person to ask if specific photos can be used have been universally rejected.

So my big questions are:

  1. Why are photos of the Prime Minister performing his official duties, taken by staff members, owned by a political party rather than the government (or the people)?
  2. Why is use of the photos so harshly restricted?
  3. How does it help promote New Zealand and New Zealand culture when photographs of our politicians can’t be used or reproduced by sites such as Wikipedia?
  4. What happens in 50 years when future historians want photographs of our politicians and they don’t exist, or the ownership is unclear because copyright was never correctly transferred to the National Party (if it still exists) by the original photographer or got lost at some point?

But really all I’m after is for the photographs to be released under a more liberal license, much like the photographs from the White House. As a New Zealander I really shouldn’t have to wait until a New Zealand politician meets the US president before a free photograph of him is released, so that our PM’s Wikipedia article doesn’t have to show him wearing a green tie.


NZNOG 2010 – Day 2 – Session 4

Metro WDM for the fiscally prudent – Simon Blake

  • CWDM – split into various bands – uncooled lasers
  • Single-mode fibre – G.652c ideally – coloured optics – components
  • DOM/DDM support (SFF-8472) – query the SFP and see what signal level it’s getting (over or under strength)
  • 1-8 channel mux/demux – 8 channels 1471-1611nm over a pair of fibres
  • Cisco 8 port mux/demux $6k/end
  • ebay 8 port mux/demux $800-1000/end
  • Direct import 8 port mux/demux $US550/end
  • 2 x 10GE on one pair – 2 channel 1310/1550 CWDM splitter (a mux, not a splitter) – $40 kit on direct import – vs the numbers above
  • 1 x 10GE on a single fibre – optical circulators $NZ 1000k, $US14 imported
  • 6 node network, 4 dark fibres – $27K
  • Trying to solve the problem of lots of small hops, with upstream buildings losing power (unpowered gear)
  • Pros: multiprotocol; performance/security/reliability
  • Cons: short haul (sub-120km), only 18 channels, doesn’t do >10GE per channel, you need fibre
  • Direct import pros: cheaper, especially in bulk – design flexibility
  • Direct import cons: no support except swaps – freight – language/culture challenges
  • Traps – water peak, wideband receivers, near-end reflection, availability of 10GE optics – DOM (ask for it) and untrusted optics – measurement equipment/circuit records – link budgets and insertion loss (a worked example follows this list)
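
To show the kind of arithmetic behind the “link budgets and insertion loss” trap, here is a toy check: transmit power minus every loss on the path has to clear the receiver’s sensitivity with some margin. All figures below are illustrative assumptions, not numbers from the talk; check the datasheets for your actual optics:

    // Toy CWDM link-budget check with assumed, illustrative figures.
    public class LinkBudget {
        public static void main(String[] args) {
            double txPowerDbm = 0.0;         // transmit power of the optic
            double rxSensitivityDbm = -23.0; // minimum receivable power
            double fibreKm = 40.0;
            double fibreLossDbPerKm = 0.25;  // assumed single-mode loss
            double muxInsertionDb = 2.5;     // per mux/demux passed
            double connectorLossDb = 0.5;    // per mated connector pair
            int muxes = 2, connectors = 4;

            double totalLoss = fibreKm * fibreLossDbPerKm
                    + muxes * muxInsertionDb
                    + connectors * connectorLossDb;
            double margin = txPowerDbm - totalLoss - rxSensitivityDbm;
            System.out.printf("loss %.1f dB, margin %.1f dB%n", totalLoss, margin);
            if (margin < 3.0) { // keep a few dB spare for ageing and repairs
                System.out.println("link budget too tight");
            }
        }
    }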

Monkeying around on the APE – Michael Jager

  • Plugged a new port in at APE and found things very open
  • Packet sniffer + APE – should only see broadcast traffic and traffic destined for me
  • What did the sniffer see – lots of ARP for non-APE address space – DHCP
  • Borrowing transit – see how many networks will accept packets – 46 out of 75 will accept a frame from an unknown address destined for their MAC
  • 3 ports provide proxy ARP for a random address
  • How many networks have an interface in your management network?
  • 6 will accept packets for 192.168.1.254
  • A customer can try and grab as many packets as possible across a cheap APE link rather than an expensive transit link
  • Possible things untried – ARP spoofing – responding to unanswered ARP requests (old BGP session of a removed neighbour) – responding to DHCP requests
  • Speaking OSPF to OSPF speakers – sending TCP RSTs – sending IPv6 RAs and answering IPv6 RS (like DHCP but for v6)
  • Read the IM2tubes material in Jonny and Philip’s slides from Monday
  • AMS-IX configuration guide
  • Don’t take packets from the IXP if you aren’t expecting them
  • Don’t announce the IXP network anywhere

NZNOG 2010 – Day 2 – Session 3

Announcement at the start of the session that Telecom New Zealand now has an official interconnect/paid peering policy and contact. Details to be published. Ask Greg from Telecom for help.

InternetNZ Update – Jordan Carter

  • General updates and new structure, new CEO
  • 4 main areas (openness, rights and responsibilities, security)
  • IPv6 Task Force replaces the steering group
  • Copyright – replacement policy looks better, but sneaky changes might come back
  • ACTA – key concern, lack of transparency, http://www.acta.net.nz
  • DIA filtering – voluntary and uses BGP. Gives a webpage; you can report a false alarm
  • Filter – only HTTP, erodes end-to-end, privacy concerns, might be abused later (scope creep)
  • Filter – sends the signal that “The government has made the Internet safe”
  • InternetNZ opposed – DIA unhappy with that angle
  • Fibre Stuff – “Last day for 1.5 billion lolly scramble”
  • Regional Networks or one big National Network
  • Hard to tell what will happen – Similar exercise in Aus and Govt went back to drawing board
  • What happens to International Bandwidth?
  • Please join; follow on Twitter http://twitter.com/internetnz

APNIC update and much more – Elly Tawhai

  • Over 2000 members
  • 1400+ monthly helpdesk enquiries (55% growth since last year)
  • Allocations around 100 per month
  • Various policy changes coming up – Prop-050 (transferring address space), Prop-073 (simple IPv6 allocations – 1 click), Prop-074 (32-bit ASNs treated the same as 16-bit ones, pushed back a year), Prop-075 (recover historical ASNs)
  • Policies under discussion – Prop-078 (final /8, only people deploying IPv6), Prop-079 (abuse contact info in objects), Prop-080 (removal of the IPv4 prefix exchange policy)
  • Several more allocation policies in pipeline
  • Recent Survey leading to priorities
  • Various my.apnic updates (web services even), support of research
  • More DNS root servers (Taiwan, Mongolia)
  • Please Participate

RIPE News – Tools and news – George Michaelson

  • RIPE used to be a research place and then became a RIR. RIPE labs is a return to the past
  • http://labs.ripe.net
  • Platform to test and evaluate new tools, feedback cycle
  • INRDB – big cloud of assignments, table dumps, dumps
  • Resource explainer
  • Various measurements, visualisation and links to tools. DNS reply size tester
  • Why – fast turnaround, engagement, no service guarantees

IPv6 flow chart – Nathan Ward

  • Helps you decide which IPv6 or IPv4/IPv6 translation technology you should use
  • Tunnel broker, 6to4, 6RD, Teredo, Dual-Stack Lite, double NAT, dual stack
  • Other stuff that I wasn’t paying attention to
  • IPv6 addressing schemes
  • Sparse allocations (see the sketch after this list)
  • Gives a sample which I won’t copy; look at his slides
  • Customer assignments. Nathan likes /56s, or the RFC-recommended /48. Take your pick
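
As an illustration of sparse allocation (my sketch, not Nathan’s sample): one common approach, in the spirit of RFC 3531, is to hand out subnet IDs in bit-reversed order, so successive allocations sit as far apart as possible and each one has room to grow in place:

    // Sparse allocation by bit-reversed counting. Example: the first
    // eight /56 subnet-ids inside a /48 (8 bits of subnet space).
    public class SparseAlloc {
        // Reverse the low `bits` bits of n, spreading successive
        // allocations as far apart as possible in the space.
        static int bitReverse(int n, int bits) {
            int r = 0;
            for (int i = 0; i < bits; i++) {
                r = (r << 1) | (n & 1);
                n >>= 1;
            }
            return r;
        }

        public static void main(String[] args) {
            for (int n = 0; n < 8; n++) {
                System.out.printf("allocation %d -> subnet id %02x%n", n, bitReverse(n, 8));
            }
            // prints 00, 80, 40, c0, 20, a0, 60, e0 ...
        }
    }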

Andy is Curious – Andy Linton

  • Are Universities turning out the right people?
  • Good at turning out application programmers, not systems programmers

NZNOG 2010 – Day 2 – Session 2

DNSSEC at the root zone – Joe Abley

  • ICANN – manages the Key Signing Key (KSK) – accepts DS records from zone operators – sends updates to DoC for authorisation and to VeriSign for implementation
  • DoC authorises changes and VeriSign implements them
  • New process has VeriSign signing the keys. VeriSign gets a few weeks’ worth of KSKs that DoC signs in batches beforehand
  • DNSSEC Practice Statement – describes procedures, currently drafts
  • Around 20 Trusted Community Representatives (TCRs) have an active role in the management of the KSK
  • 2 copies of the keys, west coast and east coast. Plus a distributed backup
  • “Ceremony” for each step in the procedure, specifying what you do, how many people are needed, and which people are present.
  • Similar to what x.509 CAs do
  • KSK is a 2048-bit RSA key, rolled every 2-5 years (RFC 5011, but not everything supports that) – signatures use SHA-256
  • ZSK is a 1024-bit RSA key – zone signed with NSEC – rolled 4 times a year – signatures are SHA-256
  • Time cycle every 90 days – ZSK overlap of a couple of weeks
  • Root trust anchor – published in an XML document with a constant URL – plain DNS record – PKCS#10 CSR, as a self-signed public key, signable by others if they want
  • DO=1 is part of EDNS0 – says the client wants DNSSEC – many clients set the bit even though most won’t really use it right now – will cause all replies to jump in size (see the query sketch after this list)
  • Hard to sign the root and then roll back
  • Staged deployment – start serving DNSSEC from 1 root server at a time – L-root first, then A, then the others, with J last
  • DURZ – deliberately unvalidatable root zone: an unverifiable key published as a placeholder
  • Measurement – packet captures, dialogue with operators – wide range of pre-testing with various software – tests with clients that drop large packets
  • DS change requests – TLD procedure to be decided – DS requests 1-2 months before the zone is published
  • http://www.root-dnssec.org
  • Timeline – test key signing Dec 2009 – Jan 2010. Jan – July 2010: roll out signed roots. July 2010: full production
  • Lots of documentation on website
  • Indication of a big jump in TCP queries, presumably because UDP replies are too big
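
As a client-side aside (my example, not from the talk): with the dnsjava library, a query with DO=1 looks roughly like this. I’m assuming dnsjava’s four-argument setEDNS here; check the exact signature against the library version you use, and note the resolver address and query name are just examples:

    import java.util.List;
    import org.xbill.DNS.ExtendedFlags;
    import org.xbill.DNS.Lookup;
    import org.xbill.DNS.Record;
    import org.xbill.DNS.SimpleResolver;
    import org.xbill.DNS.Type;

    // Ask for the root's DNSKEY records with the EDNS0 DO bit set,
    // i.e. "I want DNSSEC data back".
    public class DnssecQuery {
        public static void main(String[] args) throws Exception {
            SimpleResolver resolver = new SimpleResolver("9.9.9.9");
            // EDNS version 0, 4096-byte UDP payload, DO flag, no options
            resolver.setEDNS(0, 4096, ExtendedFlags.DO, List.of());
            Lookup lookup = new Lookup(".", Type.DNSKEY);
            lookup.setResolver(resolver);
            Record[] answers = lookup.run();
            if (answers != null) {
                for (Record r : answers) {
                    System.out.println(r);
                }
            }
        }
    }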

ENUM – Jay Daley

  • Why doesn’t telephony work like email?
  • With email you choose how to publish your email records, where to host, what email to accept; you can outsource; you’re totally in control
  • So IP telephony should be easy too?
  • Unfortunately not
  • Non-site-local numbers MUST go to a telco to get delivered
  • Missing – a single, global directory linking telephone numbers to VoIP endpoints
  • This is ENUM. Telephone number -> domain name – simple algorithm – e164.arpa – 04 931 6970 -> 0.7.9.6.1.3.9.4.4.6.e164.arpa (a worked example follows this list)
  • Won’t be typed; translation is done by a device – people still type old-fashioned numbers
  • Register your number, create a zone. Add NAPTR records to the DNS zone. Special records specify endpoints (usually SIP); receive calls
  • NAPTR records do interesting stuff, e.g. “dig +short nsrs.tel naptr”
  • How? Option 1 – enable it on your internet-connected VoIP PBX
  • Option 2 – on a session border controller – “enterprise”
  • Option 3 – an ENUM proxy (if the existing SBC doesn’t handle ENUM)
  • Registration process – not the same as for domains, since the numbers are already registered – needs authentication
  • Various methods of authentication in different places
  • No ENUM in NZ. Available in the UK, Holland, Ireland, Germany and Austria, but without significant takeup
  • Reasons for lack of takeup in those countries – lack of mindshare – hostility from telcos
  • Why not in NZ – TCF 2006 report – privacy issues (but you only publish what you like) – emergency services access (no idea where callers are) (but all VoIP has this problem) – policy/governance – “carrier issues”
  • ENUM is about control – moving it from the carrier to you
  • Key users – call centres, ENUM instead of 0800 – large supply chains (mandate VoIP) – multiple sites, simplify provisioning
  • Won’t happen without demand
  • “On the Internet voice is just another application”
  • Significant political and commercial resistance from telcos
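
A worked sketch of that number-to-name algorithm (my illustration, not from the talk): normalise to E.164, reverse the digits, dot-separate them, and append e164.arpa. The resulting name is what gets queried for NAPTR records:

    // ENUM mapping as described above. The hard-coded example matches
    // the talk's 04 931 6970 case (NZ is +64; drop the leading 0).
    public class EnumName {
        public static String toEnumDomain(String e164Number) {
            String digits = e164Number.replaceAll("[^0-9]", ""); // strip "+", spaces
            StringBuilder name = new StringBuilder();
            for (int i = digits.length() - 1; i >= 0; i--) {
                name.append(digits.charAt(i)).append('.');
            }
            return name.append("e164.arpa").toString();
        }

        public static void main(String[] args) {
            System.out.println(toEnumDomain("+64 4 931 6970"));
            // prints 0.7.9.6.1.3.9.4.4.6.e164.arpa
        }
    }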

Day in the Life of the Internet – Sebastian Castro

  • 4 years of DNS data
  • DITL motivation – network measurement – collection of data from DNS root servers – yearly since 2006
  • More and more root servers, alt root servers, gTLDs etc; passive traces, 48-72 hours
  • concentrate on root server data
  • Pick best 24 hours out of total window
  • 4-8 billion queries, 3-6 million unique clients – some 5-12% recursive queries
  • Mostly A queries; AAAA increasing due to glue records being added (why are IPv4 clients sending AAAA queries when they probably won’t/can’t use the answers?)
  • 70% of clients are EDNS capable (90% of these are DO enabled)
  • However clients sending lots of queries (probably broken) have good support – but clients that query less have a lower level of support
  • 10 invalid TLDs represent 10% of queries (.local, .localdomain, wpad, invalid, home, belkin, corp, lan)
  • Impossible to track down
  • Most queries from NZ go to the Auckland root and Brisbane root, but some go to overseas servers (those might be using simple round-robin server selection)
  • Lessons – data collection is hard – clock skew, data loss, wrong command line options, bad network taps
  • Data management – more data, more participants – more formats – big effort to normalise data, fill gaps, fix clock skew.

NZNOG 2010 – Day 2 – Session 1

Lightning Talks

  • Geoff Huston – Stateless TCP and DNS
  • TCP limitations – rough at high load
  • UDP limitations – requires IP fragmentation
  • Problems when the response is bigger than the MTU; fragments of UDP IPv6 often dropped. Switching to TCP drives up load again
  • Simulate UDP with TCP – do a minimal crappy response to fill the headers
  • Ignore options, don’t retransmit, ignore anything else from the client, just close the connection
  • No reliability, no flow control; a bad idea, but it seems to work
  • Olof Kasselstrant – IXOR
  • Small IX in Malmo and Copenhagen (2nd site being looked at)
  • DIX is the only other IX in Denmark
  • Sponsors for Fibre and Equipment
  • Exchange is in 2 countries. Does it affect “must peer in 4 countries” agreements?
  • Dream to be in 4 sites soon
  • CCIP – Barry Brailey
  • Getting out of rewriting Microsoft patch notices
  • “investigation and analysis” function being dropped
  • Information and alerting – website, newsletter, alerts – alerts targeted, with a highish threshold
  • Outreach and partnering – main function – liaise with overseas CERTs – talk to various groups – education: presentations, newsletters, exercises (CyberStorm III – volunteers)
  • Security information exchanges – various groups – traffic light protocol – looking at some new forums – maybe an ISP SIE
  • Cloud Computing for Service Providers – Richard Wade
  • As a service provider – should I care?
  • Infrastructure Foundation (Cisco, EMC, HP)
  • Infrastructure as a service (Amazon, Sun, Savvis)
  • Platform as a service (Amazon, MS Azure)
  • Software as a service (Salesforce, Google Apps)
  • Integrated management (network, servers, hypervisor, storage) – unified fabric
  • Why, and why should I care?
  • Customer advantages – eliminate capex – reduce opex – IT as a utility
  • Customer problems – no LAN apps (often overseas) – WAN now business critical – operational relationship with an overseas provider – legal jurisdiction of data
  • Service provider advantages – understand managed services – existing datacentres and infrastructure – OSS, process, staff and contacts – SLAs – domestic provider
  • Service provider problems – managed customer revenue declining – race to the bottom? – increased international transit – high expectations of quality and reliability
  • Lame alternative IX update technique – Simon Blake
  • New system to update filter lists for IXs
  • Citylink can instead download a list of networks from a customer URL
  • Pulls the list daily
  • If there’s a diff, email for confirmation or action it immediately
  • ALTO – Lloyd
  • Helping p2p users select local/nearby peers
  • GeoIP and anycasting are rough
  • ALTO allows the ISP to provide application, location, routing, charging and performance information
  • ISP puts some servers (iTrackers) on the network that deliver policy information to the p2p client
  • p2p caches (very close to the edge) can be advertised
  • Not currently in use in the wild
  • IPv6 taskforce – Dean Pemberton
  • InternetNZ + MED
  • TechSIG – 3 hui in 2009 – aimed at CIOs/CTOs – went really well
  • Looking at more training (a session in 2009 already)
  • Other things Task Force can do?

Building a Datacentre for less than $1 million – Gerald Creamer

  • When it’s your own money you care so much more
  • Had to move the datacentre to another building
  • The short version is that you can’t do it for less than $1m
  • Significant cost areas – Physical – power – cooling – network – time
  • The right building – 18-month search – 100 sites looked at – 7 sites investigated – 4 sites due diligence
  • Engineers – “consultation” vs “conversation”
  • First culling – all concrete – not ground floor, not top floor – strong (5kPa) – high stud – no sprinklers – built between the 50s and mid-80s – CBD fringe
  • $400 per m2 to strengthen a building
  • 2nd culling – close to a street transformer – shorter power cable runs in the building – shorter pipes for cooling – outdoor space – generator space – near data networks
  • Useful – friendly landlord – nice bank – recession (keen landlord)
  • Save money – quality pre-owned hardware – “free” stuff – ask experts – do some stuff yourself – get experts to do the rest
  • Cables up abandoned lift shaft
  • 2nd-hand generator – not as large as the final requirement but big enough for the current build
  • Room to upgrade UPS, generator, cables and space spec’d for more
  • domestic meters to measure power in each rack
  • Process coolers (cheaper), 28kW each, $1500/kW cost – $70k of aircon for $7k – check serial numbers with the manufacturer to find the product history
  • Seismic Bracing – $30k
  • Helped corporates clear out datacentres they were moving out of (“make good” on leases) and picked up some equipment
  • Citylink and Telstra provisioned fibre. Telecom less helpful.