Keynote: How Adversaries Use AI by Jana Dekanovska
- Adversary
- Nation States
- eCrime
- Hacktivism
- Trends
- High Profile eCrime Attacks – Ransomware shifting to data extortion
- Malware-Free Attacks – Phishing and social engineering to get in rather than malware
- Cloud Consciousness – The adversary knows they are in the cloud and has to operate in it
- Espionage – Focused on Eastern Europe and the Middle East
- Vulnerability Exploitation – Not just zero days; it takes a while to learn to leverage vulnerabilities
- Generative AI
- Code Generation
- Social Engineering – Helps people sound like native speakers, improves wording
- Prompt Injection
- Big Four States sponsoring attacks – China, North Korea, Iran, Russia
- North Korea – Often after money
- Russia, Iran – Concentrating on local adversaries
- China
- 1 million personnel in cyber security
- Gets as much data as possible
- Elections
- Won’t be hacking into voting systems
- Will be generating news, stories, content and targeting populations
- Crime Operations
- GenAI improves the efficiency and speed of attacks
- Average breakout time has dropped from 10 hours in 2018 to 1 hour now
- Members from around the world, at least one from Australia
- Using ChatGPT to help out during intrusions to understand what they are seeing
- Using ChatGPT to generate scripts
Consistent Eventually Replication Database by William Brown
- Sites go down. Let's have multiple sites for our database
- CAP Theorem
- PostgreSQL Database
- Active Primary + Standby
- Always Consistent
- Promote passive to active in event of outage
- Availability
- But not partition tolerant
- etcd
- Nodes elect an active node which handles writes. If passive nodes go offline the others are still happy
- If the active node fails then a new active node is elected and handles writes
- Not available: an isolated node stops serving because it doesn't know the state of the other nodes (dead or just unreachable)
- Active Directory
- If a node is disconnected it will just keep serving old data
- Reads and writes are always serviced even if a node is out of contact with the others
- Not consistent
- Kanidm
- identity management database
- Want availability and partition tolerance
- Because we want disconnected nodes to still handle reads and writes (eg for branch office that is off internet)
- Also want to be able to scale very high, single node can’t handle all the writes
- Building and Design
- Simultaneous writes have to happen on multiple servers; what happens if writes overlap, i.e. changes to the same record on different servers?
- "What would Postgres do?"
- Use nanosecond timestamps and apply events in order, only worrying about real conflicts. Use a Lamport clock (which only goes forward)
- What happens if the timestamps match?
- Each server gets a UUID; the UUID is appended to the timestamp so one server always sorts slightly newer
- Both servers can go through the process in isolation and arrive at the same database content (see the sketch below)
- Lots more stuff but I got lost
- Most of your code will be handling the weird paths, and they must all be tested
- Complaint that academic papers are very hard to read. Difficult to translate into code.
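A minimal sketch of the ordering trick described above, assuming events are stored as (timestamp, change) pairs; this is an illustration of the idea, not Kanidm's actual implementation:

```python
import time
import uuid

class ReplicaClock:
    """Toy Lamport-style clock: nanosecond timestamps that never move
    backwards, with a per-server UUID appended as a tie-breaker."""

    def __init__(self):
        self.server_id = str(uuid.uuid4())  # assigned once per server
        self.last = 0                       # highest timestamp seen so far

    def now(self):
        """Timestamp for a local write: wall clock, but never backwards."""
        self.last = max(self.last + 1, time.time_ns())
        # (timestamp, server_id) means two servers can never emit an
        # identical value, so the ordering is total and deterministic.
        return (self.last, self.server_id)

    def observe(self, remote_ts):
        """When replicating in a remote event, advance past its timestamp."""
        self.last = max(self.last, remote_ts[0])


def replay(events_a, events_b):
    """Each server sorts the union of events by (timestamp, server_id) and
    applies them in that order, so both arrive at the same database content."""
    return sorted(events_a + events_b, key=lambda e: e[0])
```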
Next Generation Authorisation – a developer's guide to Cedar by Ricardo Sueiras
- Authorisation is hard
- Cedar
- DSL around authorisation
- Policy Language
- Evaluation and Authorisation Engine (see the sketch below)
- Easy to Analyse
- Authorisation Language
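Cedar is its own policy DSL; as a rough illustration of its evaluation model (default deny, any matching forbid overrides any permit), here is a toy sketch in Python rather than real Cedar:

```python
from dataclasses import dataclass
from typing import Callable, List

# Toy model of Cedar-style evaluation: policies are permit or forbid rules
# over a (principal, action, resource) request. This is not the Cedar engine,
# only an illustration of the decision model: default deny, forbid wins.

@dataclass
class Request:
    principal: str
    action: str
    resource: str

@dataclass
class Policy:
    effect: str                           # "permit" or "forbid"
    matches: Callable[[Request], bool]    # condition over the request

def is_authorized(request: Request, policies: List[Policy]) -> bool:
    matched = [p.effect for p in policies if p.matches(request)]
    if "forbid" in matched:
        return False             # an explicit forbid always wins
    return "permit" in matched   # otherwise allow only if some permit matched

# Example: allow alice to view photos, but forbid everyone from deleting them.
policies = [
    Policy("permit", lambda r: r.principal == 'User::"alice"'
                               and r.action == 'Action::"view"'),
    Policy("forbid", lambda r: r.action == 'Action::"delete"'),
]

print(is_authorized(Request('User::"alice"', 'Action::"view"',
                            'Photo::"trip.jpg"'), policies))   # True
print(is_authorized(Request('User::"alice"', 'Action::"delete"',
                            'Photo::"trip.jpg"'), policies))   # False
```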
Managing the Madness of Cloud Logging by Alistair Chapman
- The use case
- Every vendor puts its logs in weird places and in weird formats, all differently
- Different defaults for different events
- Inconsistent event formats
- Changes must be proactive – You have to turn on before you need it
- Configuration isn't static – Vendors can change the format with little warning
- Very easy to access the platform APIs from a VM.
- Easy to get on a VM if you have access to the Cloud platform
- Platform Security Tools
- Has access to all logs and can correlate events
- Doesn't work well if you are not 100% on their product, i.e. multi-cloud
- Can cost a lot, requires agents to be deployed
- Integrating with your own SIEM platform
- Hard to push logs out to external sources sometimes
- Can get all three into Splunk, Loki, or Elastic
- You have to duplicate what the cloud provider has already done
- Assess your requirements
- How much do you need live correlation vs reviewing after something happened
- Need to plan ahead
- OCSF, OTel, ECS – Standards. Pick one and use it for everything (see the sketch after this list)
- Try to log everything: audit events, performance metrics, billing
- But obviously lots of logs cost lots of money
- Make it actionable – Discoverability and correlation. Automation
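As an illustration of the "pick one schema" point, a minimal sketch that maps differently shaped vendor events onto one in-house shape before shipping them to the SIEM; the vendor names, field names, and target schema are invented for this example rather than taken from OCSF, OTel, or ECS:

```python
# Toy normaliser: each vendor emits differently shaped events, so map them
# all onto one schema before shipping to the SIEM. Vendor and field names
# here are made up; in practice pick OCSF, OTel or ECS and use its fields.

def normalise(vendor: str, event: dict) -> dict:
    if vendor == "cloud_a":
        return {
            "time": event["eventTime"],
            "actor": event["identity"],
            "action": event["eventName"],
            "source": vendor,
        }
    if vendor == "cloud_b":
        return {
            "time": event["timestamp"],
            "actor": event["caller"],
            "action": event["operation"],
            "source": vendor,
        }
    # Unknown vendors are wrapped rather than silently dropped, so gaps in
    # the mapping show up in the SIEM instead of disappearing.
    return {
        "time": event.get("time", "unknown"),
        "actor": "unknown",
        "action": "unparsed_event",
        "source": vendor,
        "raw": event,
    }

# Example: two differently shaped events end up with the same keys.
print(normalise("cloud_a", {"eventTime": "2024-04-17T01:02:03Z",
                            "identity": "alice", "eventName": "Login"}))
print(normalise("cloud_b", {"timestamp": "2024-04-17T01:02:04Z",
                            "caller": "bob", "operation": "DeleteBucket"}))
```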
- Taming log Chaos
- Learn from Incidents – What sort of thing happens, what did you need available
- Test assumptions – eg How trusted is “internal”
- Log your logging – How would you know it is not working? (see the heartbeat sketch after this list)
- Document everything – Make it easier to detect deviations from norm
- Have processes/standards for the teams generating the events (eg what tags to use)
- Prioritise common mistakes
- Opportunity for learning
- Don’t forget to train the humans
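One way to "log your logging", sketched under the assumption that you can query your log platform for recent events; the event shape and the 15-minute threshold are made up for the example:

```python
from datetime import datetime, timedelta, timezone

# Toy "log your logging" canary: emit a known heartbeat event on a schedule,
# then alert if the newest heartbeat visible in the log platform is too old.

MAX_AGE = timedelta(minutes=15)

def heartbeat_event() -> dict:
    """A recognisable event to emit on a schedule from each log source."""
    return {
        "action": "logging_heartbeat",
        "source": "canary",
        "time": datetime.now(timezone.utc).isoformat(),
    }

def pipeline_is_healthy(latest_heartbeat_time: datetime) -> bool:
    # If the newest heartbeat the SIEM can see is older than the threshold,
    # something between the emitter and the search index is broken.
    return datetime.now(timezone.utc) - latest_heartbeat_time <= MAX_AGE
```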
- Think Holistically
- App security is more than just code
- Automation and tooling will help, but won't solve it on their own
- If you don’t have a security plan… Make one
- Common problems
- Devs will often push keys to GitHub
- GitHub has a feature to block pushes containing common key formats, but it must be enabled
- Summary
- The logs you gather must be actionable
- Get familiar with the logs, and verify they actually work the way you think
- Put the logs in one place if you can
- Plan for the worst
- Don’t let the logs overwhelm you. But don’t leave important events unlogged
- The fewer platforms you use the easier it is