Mirror, mirror, on the wall: testing Conway’s Law in open source communities – Lindsay Holmwood
- The mapping between the organisational structure and the technical structure.
- When the two mirror each other it’s easy to find who owns something; you don’t have to keep two maps in your head.
- The organisational structure needs flexibility in order to support flexibility in the technical design.
- Conway’s “Law” is really just an adage.
- Complexity frequently takes the form of hierarchy
- Organisations that mirror perform badly in rapidly changing and innovative environments
Metrics that Matter – Alison Polton-Simon (Thoughtworks)
- Metrics Mania – lots of focus on metrics everywhere (Fitbits, Google Analytics, etc.)
- How to help teams improve CD process
- Define CD
- Software consistently in a deployable state
- Get fast, automated feedback
- Do push-button deployments
- Identifying metrics that mattered
- Talked to people
- Contextual observation
- Rapid prototyping
- Pilot offering
- 4 big metrics
- Deploy ready builds
- Cycle time
- Mean time between failures
- Mean time to recover
- Number of Deploy-ready builds
- How many builds are ready for production? (see the counting sketch below)
- Routine commits
- Testing you can trust
- Product + Development collaboration
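
A minimal sketch of how this count might be derived, assuming hypothetical build records that carry a finish date and an all-stages-green flag (none of these names come from the talk):

```python
from collections import Counter
from datetime import date

# Hypothetical build records: (build id, finish date, did every stage pass?)
builds = [
    ("1204", date(2017, 5, 1), True),
    ("1205", date(2017, 5, 2), False),
    ("1206", date(2017, 5, 3), True),
]

def deploy_ready_per_week(builds):
    """Count builds whose every stage went green, grouped by ISO week."""
    per_week = Counter()
    for _build_id, finished_on, green in builds:
        if green:
            year, week, _ = finished_on.isocalendar()
            per_week[(year, week)] += 1
    return per_week

print(deploy_ready_per_week(builds))  # Counter({(2017, 18): 2})
```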
- Cycle Time
- Time it takes to go from a commit to a deploy (see the sketch below)
- Efficient testing (run a fast subset first, make the tests themselves faster)
- Appropriate parallelization (lots of build agents)
- Optimise build resources
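
A rough sketch of the measurement itself, assuming you can pair each commit timestamp with the deploy that shipped it (the data here is invented):

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit time, deploy time) pairs from VCS and deploy logs.
commit_to_deploy = [
    (datetime(2017, 5, 1, 9, 0),  datetime(2017, 5, 1, 13, 30)),
    (datetime(2017, 5, 2, 10, 0), datetime(2017, 5, 2, 11, 23)),
    (datetime(2017, 5, 3, 16, 0), datetime(2017, 5, 4, 9, 15)),
]

def cycle_time(pairs):
    """Median elapsed time from commit to production deploy."""
    return median(deployed - committed for committed, deployed in pairs)

print(cycle_time(commit_to_deploy))  # 4:30:00
```

The median is used here rather than the mean so that one pathological build doesn’t swamp the number.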
- Case Study
- Monolithic Codebase
- Hand-rolled build system
- Unreliable environments (tests and builds fail at random)
- Validating a Pull Request can take 8 hours
- Coupled code: isolated teams
- Wide range of maturity in testing (some with no tests, some with 95% coverage)
- No understanding of the build system
- Releases routinely delayed (10 months!) or done “under the radar”
- Focus in case study
- Reducing cycle time, increasing reliability
- Extracted services from monolith
- Pipelines configured as code
- Build infrastructure provisioned with Docker and Ansible
- Results:
- Cycle time for one team: 4-5h -> 1h 23m
- Deploy-ready builds: 1 per 3-8 weeks -> weekly
- Mean time between failures (see the sketch below)
- Quick feedback early on
- Robust validation
- Strong local builds
- Should not be improved by simply reducing the number of releases
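
As a sketch, MTBF is just the mean gap between consecutive failure timestamps; the failure data below is made up:

```python
from datetime import datetime, timedelta

# Hypothetical timestamps of pipeline/production failures, oldest first.
failures = [
    datetime(2017, 5, 1, 9, 0),
    datetime(2017, 5, 3, 14, 0),
    datetime(2017, 5, 8, 10, 0),
]

def mtbf(failure_times):
    """Mean gap between consecutive failures."""
    gaps = [later - earlier
            for earlier, later in zip(failure_times, failure_times[1:])]
    return sum(gaps, timedelta()) / len(gaps)

print(mtbf(failures))  # 3 days, 12:30:00
```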
- Mean time to recover (see the sketch below)
- How long until the build is back to green?
- Monitoring of production
- Automated rollback process
- Informative logging
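
And MTTR is the mean of the red-to-green durations; again a sketch over invented incident data:

```python
from datetime import datetime, timedelta

# Hypothetical incidents: (went red, back to green).
incidents = [
    (datetime(2017, 5, 1, 9, 0), datetime(2017, 5, 1, 10, 30)),
    (datetime(2017, 5, 3, 14, 0), datetime(2017, 5, 3, 14, 45)),
]

def mttr(incidents):
    """Mean time from going red to being back on green."""
    downtimes = [green - red for red, green in incidents]
    return sum(downtimes, timedelta()) / len(downtimes)

print(mttr(incidents))  # 1:07:30
```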
- Case Study 2
- 1.27 million lines of code
- High cyclomatic complexity
- Tightly coupled
- Long-running but frequently failing tests
- Isolated teams
- Pipeline run duration: 10h -> 15m
- MTTR: never -> 50 hours
- Cycle time: 18d -> 10d
- Created a dashboard for the metrics
- Meaningless Metrics
- The company will build whatever the CEO decides to measure
- Lines of code produced
- Number of bugs resolved – real life duplicates the Dilbert strip about paying developers per bug fixed
- Developer hours / story points
- Problems
- Lack of team buy-in
- Easy to game
- Unintended consequences
- Measuring inputs, not impacts
- Make your own metrics
- Map your path to production
- Highlights pain points
- Collaborate
- Experiment