OpenLI: Lawful Intercept Without the Massive Price Tag
– Shane Alcock
- Police get a warrant, take it to the ISP
- ISP Obligations
- Can’t tip off person being intercepted
- Both current and past intercepts must be private
- Can't intercept other (non-target) people's communications
- Must intercept all of the target's communications
- NZ Lawful Intercept
- All Providers with more than 4000 customers must be LI capable
- Must be streamed live
- TCP/IP over tunnel
- Higher level agencies have extra requirements
- 2 separate handovers – IRI = metadata for calls and IP sessions, CC = the intercepted data packets (see the sketch below)
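A rough illustration of the two handover streams as I understood them – the type and field names below are my own invented examples, not OpenLI's actual data model.

```cpp
// Hypothetical sketch of the two handover streams; names are illustrative only.
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

// IRI: Intercept Related Information – metadata about calls / IP sessions.
struct IriRecord {
    std::string liid;       // lawful intercept ID tied to the warrant
    uint64_t timestamp_us;  // when the event happened
    std::string event;      // e.g. "SIP INVITE", "RADIUS Accounting-Start"
};

// CC: Content of Communication – the intercepted packets themselves.
struct CcRecord {
    std::string liid;
    uint64_t timestamp_us;
    std::vector<uint8_t> packet;  // raw captured packet bytes
};

// Each type of record goes to the agency over its own handover interface.
using HandoverRecord = std::variant<IriRecord, CcRecord>;
```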
- OpenLI
- $10,000s – $100,000s to implement and license from vendors
- WAND had some expertise in packet collection
- Known by the NZ Network Operator community
- Voluntary contributions from NZ Network Operators
- $10k+ each
- Buys 50% of my time to work on it for a year.
- Avoiding Free Rider problem
- Early access for supporters
- Dev assistance with deployment
- Priority support for bugs and features
- Building Blocks
- Developed and tested on Debian
- Should work on other Linux flavours
- Written in C – fast, and he likes writing C
- Uses libtrace from WAND
- Data Plane Development Kit (DPDK)
- Provisioner
- Interface for operators
- Not very busy
- Collector
- Comms from Provisioner
- Intercept instructions
- Recommended to run on bare metal
- 1RU Server with 10G interface with DPDK support
- Supports multiple collectors
- Mediator
- Gets data from Collector
- Forwards to the Agency based on instructions from the Provisioner (see the sketch below)
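A very rough sketch of how I picture the Provisioner / Collector / Mediator split fitting together – all names and fields here are made up for illustration, not OpenLI's real messages or API.

```cpp
// Hypothetical sketch of the provisioner -> collector -> mediator flow.
#include <iostream>
#include <string>
#include <vector>

// What the provisioner pushes to every collector: who to intercept.
struct InterceptInstruction {
    std::string liid;         // intercept ID tied to the warrant
    std::string target_user;  // e.g. SIP username or RADIUS account
};

// What the provisioner tells the mediator: where each intercept's output goes.
struct MediatorRoute {
    std::string liid;
    std::string agency_endpoint;  // e.g. "192.0.2.10:40000"
};

int main() {
    // The provisioner is the single source of truth for both tables.
    std::vector<InterceptInstruction> intercepts = {{"LIID-0001", "alice@example.net"}};
    std::vector<MediatorRoute> routes = {{"LIID-0001", "192.0.2.10:40000"}};

    // Collectors match traffic against `intercepts`; matched records flow to
    // the mediator, which looks up `routes` and streams them to the agency.
    std::cout << intercepts.size() << " intercept(s), "
              << routes.size() << " route(s) configured\n";
}
```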
- Target Identification
- Nothing in the packets links them to the target user
- Users get dynamic IPs, which can change
- For VoIP calls, need to know the RTP port
- SIP for VoIP, RADIUS for IP sessions, to identify the user's IPs/ports
- Deriving caller identities from SIP packets can be tricky – other headers can be used, depending on various factors (see the sketch after this list)
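A sketch of the target-identification idea above: the collector has to build its own user-to-address mapping from signalling traffic before it can pick out the right packets. Everything here (names, fields, functions) is my own illustration, not OpenLI's code.

```cpp
// Illustrative only: map signalling events to the target's current IPs/ports,
// then match raw packets against that mapping.
#include <cstdint>
#include <string>
#include <unordered_map>

struct SessionMap {
    // RADIUS accounting tells us which dynamic IP a subscriber has right now.
    std::unordered_map<std::string, uint32_t> user_to_ip;   // username -> IPv4
    // SIP/SDP tells us which RTP port a target's current call is using.
    std::unordered_map<std::string, uint16_t> user_to_rtp;  // username -> port

    void on_radius_accounting(const std::string& user, uint32_t framed_ip) {
        user_to_ip[user] = framed_ip;    // overwrite on re-auth / new session
    }
    void on_sip_invite(const std::string& caller, uint16_t rtp_port) {
        user_to_rtp[caller] = rtp_port;
    }
    // Does this packet's source address belong to an intercepted user?
    bool matches(const std::string& target, uint32_t src_ip) const {
        auto it = user_to_ip.find(target);
        return it != user_to_ip.end() && it->second == src_ip;
    }
};
```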
- Performance Matters
- 1Gb/s plans are available to residential customers
- ISP may have multiple customers being intercepted. Collector must not drop packets
- Aim to support multiple Gb/s of data
- libtrace lets us spread the load across multiple interfaces, CPUs, etc.
- But packets may now be spread across multiple threads
- Lots of threads to keep in sync (see the sketch below this list)
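A minimal sketch of the load-spreading idea: hash each packet's addresses so everything for one flow stays on the same worker thread and ordering is preserved. This shows the general technique only – libtrace's actual parallel API is different.

```cpp
// Illustrative flow-hashing dispatch; not libtrace's real API.
#include <algorithm>
#include <cstdint>
#include <functional>

struct Packet {
    uint32_t src_ip;
    uint32_t dst_ip;
    // ... headers and payload omitted ...
};

constexpr unsigned kWorkers = 4;

// Both directions of a flow hash to the same worker, so per-flow ordering is
// preserved without cross-thread synchronisation on the fast path.
unsigned worker_for(const Packet& p) {
    uint32_t lo = std::min(p.src_ip, p.dst_ip);
    uint32_t hi = std::max(p.src_ip, p.dst_ip);
    uint64_t key = (static_cast<uint64_t>(hi) << 32) | lo;
    return std::hash<uint64_t>{}(key) % kWorkers;
}
```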
- Status
- All done and deployed by at least one ISP
- Core functionality in place
- 500k PPS with DPDK
- On https://github.com/wanduow/openli
- Future
- Build a user-driven community around the software
- Questions
- Can it handle a hotel? – maybe
- ISPs or police contributing? – Not yet
- What have people been doing so far? – They have been getting away with saying they will use this
- What about bad guys using? – This probably doesn’t give them any more functionality
- Larger Operators? – Gone with Vendor Solutions
- Overseas interest? – One from Kazakhstan, but it is targeted at small operators
- Why not Rust, given worries about parsing data? – Didn't have time to learn Rust
But Mummy I don’t want to use CUDA – Open source GPU compute
– Dave Airlie
- Use Cases
- AI/ML – Tensorflow
- HPC – On big supercomputers
- Scientific – Big datasets, maybe not on big clusters
- What APIs Exist
- CUDA
- NVIDIA defined
- Closed Source
- C++ Based single source
- Lots of support libraries (BLAS, cuDNN) from NVIDIA
- API – HIP
- AMD Defined
- Source code released on GitHub
- C++ based single source
- OpenCL
- Khronos Standard
- Open and closed implementations
- 1.2 vs 2.0
- OpenCL C/C++ – not single source (GPU and CPU code separate)
- Online vs offline compilation (online means final compilation at run time – see the sketch after this list)
- SPIR-V kernel
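A sketch of what the OpenCL model above looks like in practice: the kernel is a separate string of OpenCL C and gets compiled "online" at run time for whatever device is found. Error handling is stripped out and the details are from memory, so treat it as illustrative.

```cpp
#include <CL/cl.h>

// GPU code lives in its own language (OpenCL C), separate from the host code.
static const char* kKernelSrc = R"(
__kernel void scale(__global float* data, float factor) {
    size_t i = get_global_id(0);
    data[i] *= factor;
}
)";

int main() {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

    // "Online" compilation: the driver builds the kernel now, on this machine,
    // for this specific GPU. Offline / SPIR-V paths would skip this step.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kKernelSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "scale", nullptr);

    // ... create buffers, set kernel args, clEnqueueNDRangeKernel, etc. ...

    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```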
- SYCL
- Khronos Standard
- C++ single source (see the sketch after this list)
- CPU Launch via OpenMP
- GPU launch via OpenCL
- Closed (Codeplay) vs open (triSYCL)
- Opening of an implementation in progress (from Intel – Jan 2019)
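To show what "C++ single source" means here, a small SYCL example where the kernel is just a C++ lambda in the same file as the host code. Based on my understanding of SYCL 1.2.1 as it stood around 2019; details vary between implementations.

```cpp
#include <CL/sycl.hpp>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);
    {
        cl::sycl::queue q;  // picks a default device (e.g. a GPU via OpenCL)
        cl::sycl::buffer<float, 1> buf(data.data(), cl::sycl::range<1>(data.size()));
        q.submit([&](cl::sycl::handler& cgh) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(cgh);
            // Host and device code share one source file; the SYCL compiler
            // extracts this lambda as the device kernel.
            cgh.parallel_for<class scale>(
                cl::sycl::range<1>(data.size()),
                [=](cl::sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }  // buffer goes out of scope here and results are copied back to `data`
    return 0;
}
```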
- Others
- C++ AMP – Microsoft
- OpenMP – Getting better for GPUs
- OpenACC
- Vulkan Compute
- Low level submission API
- Maybe
- Future
- C++ standard
- C++ ISO standards body, ongoing input from everybody
- Implementations must be tested
- Still needs execution environment
- CUDA
- Components of GPU stack
- Source -> Compiler
- Output of GPU and CPU code
- IR
- Intermediate representation
- Between source and final binary
- NVIDIA PTX – like assembly
- OpenCL Stacks
- Vendor Specific
- LLVM Forks
- Open Source
- Development vs Release Model
- Vendors don't want to support ports to competitors' hardware
- Distro challenges
- No idea on future directions
- Large bodies of code
- Very little common code
- Forked llvm/clang everywhere in code
- Proposed Stack
- Needs reference implementation
- vendor neutral, runs on multiple vendors
- Shared code base (e.g. one copy of clang, llvm)
- Standards based
- Common API for runtime
- Common IR as much as possible
- Common Tooling – eg single debugger
- SPIR-V in executable -> NIR -> HW Finaliser
- Maybe Intel’s implementation will do this
- Questions
- Vulkan on top of Metal (MoltenVK)? – Don't know
- Lots of other questions that I didn't understand well enough to write down