DevOpsDownUnder – Day 2 – Load testing

Notes from the load testing session on Sunday.

Lonely Planet

  • Need tests developed as features are added
  • Moved tests to JMeter
  • Anyone can run and develop tests (previously just one tester, who had to write and run tests for 3 agile teams)
  • Load tests are integrated and run as part of continuous integration
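The idea of a load test as a CI gate can be sketched roughly as follows. This is a minimal illustration in Python, not Lonely Planet's actual JMeter setup; the function name, sample count, and budget are all assumptions.

```python
import time

def p95_load_check(fetch, samples=20, budget_seconds=0.5):
    """Time `samples` calls to fetch() and check the 95th percentile
    against a latency budget. Returns (passed, p95_seconds)."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fetch()  # one request against the app under test
        timings.append(time.perf_counter() - start)
    timings.sort()
    p95 = timings[int(0.95 * (len(timings) - 1))]
    return p95 <= budget_seconds, p95
```

In CI you would pass something like `lambda: urllib.request.urlopen(url).read()` as `fetch` and fail the build when `passed` comes back false.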

Atlassian

  • Test important paths and functions daily
  • Emails the tester and the author of the most recent commit if things (usually load times) get worse
  • Graphs of page load speeds
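The "alert when load times get worse" check described above boils down to comparing today's timings against a stored baseline. A hypothetical sketch (the data shapes and 20% tolerance are assumptions, not Atlassian's actual implementation):

```python
def pages_to_alert(baseline, today, tolerance=0.20):
    """Given {page: seconds} dicts for the baseline and today's run,
    return the pages that got more than `tolerance` slower."""
    return {
        page: seconds
        for page, seconds in today.items()
        if page in baseline and seconds > baseline[page] * (1 + tolerance)
    }
```

The resulting dict would then be fed to whatever sends the email to the tester and the author of the most recent commit.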

Demo of CloudTest by one of the participants. Looks amazing (although quite expensive, I guess)

Suggested vendors / Products

  • neotest
  • CloudTest
  • siege
  • BrowserMob (uses Selenium)
  • New Relic

DevOpsDownUnder – Day 2 – Flex your test

Tim Moore – Atlassian

What do we do wrong now?

  • Continuous tests? Against the production environment?

Writing flexible tests

  • Current tests make assumptions about the environment (easy to recreate the DB, etc.)
  • Sample app (a Twitter clone, “tooter”), in Java, deployed with Maven to Google App Engine
  • http://github.com/TimMoore/tooter , tooterapp.appspot.com
  • Tests use “htmlunit” to run against the actual website
  • Logs in and checks elements on the page. Tests start from logging in; make sure simple login errors are picked up and the right error pages are produced.
  • First refactor was to generalise the tests to be able to reuse things like “log in to site as user XXX, password YYY”
  • Second refactor puts everything in separate classes. Low-level things like login, host, and the HTML library all have their own class. Easy to change.
  • Keep data in the production database. Have test users or organisations to test against. Perhaps don’t confirm user signups, so new users aren’t created on each test run. Create “test groups” that test users are members of so changes only affect those test accounts; leverage A/B testing infrastructure
  • Other tools include Cucumber
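The second refactor described above can be sketched like this. The talk's code used HtmlUnit in Java; this is a Python stand-in, with a made-up FakeClient in place of the real HTML library, just to show the shape of the separation (host, client, and login flow each in their own easily swapped class):

```python
class FakeClient:
    """Stands in for an HTML library such as HtmlUnit (invented for this sketch)."""
    def post(self, url, data):
        if data.get("password") == "secret":
            return "Welcome, " + data["user"]
        return "Error: bad credentials"

class Site:
    """Low-level concerns (host, client) held in one place, easy to change."""
    def __init__(self, client, host):
        self.client = client
        self.host = host

    def login(self, user, password):
        # The reusable "log in to site as user XXX, password YYY" step
        return self.client.post(self.host + "/login",
                                {"user": user, "password": password})

site = Site(FakeClient(), "https://example.test")
page = site.login("alice", "secret")
```

Swapping `FakeClient` for a real HTTP client, or pointing `host` at staging vs production, then touches only one class.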

DevOpsDownUnder – Day 2 – Smart Grid

Joel Courtney from Energy Australia (covers parts of NSW)

  • The fundamental business of power hasn’t changed in 100 years, compared to technology and telecommunications
  • Very high reliability requirements
  • 60-80 year old infrastructure in places
  • Instrumentation is limited in how close it gets to customers: only to distribution stations (30k of these), sometimes substations
  • Rolling out 4G and other wireless networks – priority is coverage not speed
  • IP over power lines is not a viable solution
  • Need to do something with all this data

System

  • Need to integrate GIS and other information about state of network to make sense of raw numbers and how they relate to each other
  • Decided to go with a web-based system since clients were already familiar with the web: browsers on desktops, etc.
  • Web-based systems are more mature these days
  • Lack of expertise in the IT space
  • Using GIS info and map-based apps to plan rollouts (reduces travel distances)
  • Some inspiration from Flickr (URL structure, deep linking, simple relationships between assets)
  • EveryBlock (integrating and aggregating information on a geographical basis)
  • Twitter (assets report events with a timestamp; everything else is built on top of that)
  • Tumblr (similar to Twitter: move through streams of events)
  • Google Maps (simple integration, visualisation, assets in the real world)

ION

  • Application for monitoring
  • 750 sub-stations currently
  • Previous systems used polling (didn’t scale)
  • Push messaging
  • Operators are already able to get better fault locations than with current systems
  • Huge increase in information being collected (every minute vs once every few months in some cases)
  • Able to overlay assets on Google Maps (putting 30k on one map pushes the Google API). Percentage utilisation
  • In some cases substations in remote areas have only a couple of customers; not worth deploying there, especially since next-generation metering will provide the information
  • Looking at future integration once smart meters are rolled out, but what happens with that data is still to be decided by the power regulator. Strong privacy concerns.
  • Mostly event-related browsing now
  • Shorter term: pushing up into the transmission space
  • Interested in releasing more information to the public. Need input on formats and interaction.
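The poll-to-push shift mentioned above can be illustrated with a toy observer-style sketch: instead of the monitoring system polling each of the 750 substations, each substation pushes readings to subscribers the moment they happen. All names here are invented; the real system was not described in this detail.

```python
class Substation:
    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, callback):
        """Register a listener, e.g. the map overlay or a fault detector."""
        self._subscribers.append(callback)

    def push_reading(self, utilisation_pct):
        # Push-style: the event goes out when it occurs (every minute,
        # rather than being collected once every few months).
        for callback in self._subscribers:
            callback(self.name, utilisation_pct)

readings = []
station = Substation("sub-0042")
station.subscribe(lambda name, pct: readings.append((name, pct)))
station.push_reading(87.5)
```

The scaling win is that idle substations cost nothing, whereas polling pays for every station on every cycle regardless of whether anything changed.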

DevOpsDownUnder – Day 2 – John Ferlio

John Ferlio – Commit Early, Deploy Often

Flickr claims to deploy multiple times per day; not for everybody.

Various ways to deploy: tarballs, version control, packages.

Using package management to deploy

  • Treat internal apps same as external apps
  • Works with config management
  • Use CM and VMs to give devs a sample of production on their laptop
  • Devs can then develop directly to what prod looks like

Example deploy for Ruby apps

  • Use Bundler to set up the Ruby dependencies for the app
  • Simple Makefile to install Bundler and run it. Bundler installs the gems and then compiles them. Then files are put in the right dirs (prod and staging versions)
  • dh_make to create the base deb environment
  • Delete the unwanted extra Debian template files.
  • Create a staging and a production package; basic fill-in of the Debian control files to build the packages
  • Package contains the whole website and files, maybe 200MB. More advanced users might want to split some stuff off.
  • He has a little bit of code to put the bzr revision in a file in the package for later reference.
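A hypothetical command transcript of the steps above, for an app called “myapp” (the package name, version, and file names are all invented; only the tools are from the talk):

```shell
cd myapp
bzr revno > REVISION                  # record the bzr revision in a file shipped in the package
dh_make -s -p myapp_1.0 --createorig  # scaffold debian/ for a single binary package
rm debian/*.ex debian/*.EX            # delete the unwanted template files
$EDITOR debian/control                # basic fill-in: maintainer, description, dependencies
dpkg-buildpackage -us -uc             # build the (unsigned) .deb
```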

Launchpad PPAs let you build your source for various Debian distributions/architectures. He blogged about a simpler system he created (which you can host locally) a month or two back.

Used to deploy Ruby on Rails and WordPress apps.

  • Don’t have to compile things on box, don’t have to install gems, don’t have to svn update.
  • Maybe use a preinst script to stop the app server during the upgrade and restart it afterwards
  • Some talk about how to sync deployment of updated app and new database schema that it requires. Thoughts seem to be that new app version should be able to handle the old schema during the transition.
  • Rails libraries evolve so fast, which is why libraries need to be bundled. Other languages can sometimes use OS-provided libraries
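The "new app handles the old schema during the transition" idea discussed above can be sketched concretely. This uses Python's built-in sqlite3 only to keep the example self-contained; the table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")  # old schema
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def display_name(conn, user_id):
    """New app code: use the new 'nickname' column if the migration has
    already run, otherwise fall back to the old 'name' column."""
    cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
    column = "nickname" if "nickname" in cols else "name"
    row = conn.execute(f"SELECT {column} FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0]

before = display_name(conn, 1)  # new code deployed, old schema still in place
conn.execute("ALTER TABLE users ADD COLUMN nickname TEXT")  # the migration runs later
conn.execute("UPDATE users SET nickname = 'ally' WHERE id = 1")
after = display_name(conn, 1)   # same code, new schema
```

Because the new code tolerates both schemas, the package deploy and the schema migration don't have to be synchronised to the second.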