Five silly things to do when benchmarking your BPF program

Day 1 | 18:00 | 00:20 | K.4.201 | Dmitrii Dolgov


Note: I'm reworking this at the moment, some things won't work.


The Hitchhiker's Guide to the Galaxy has the following to say about benchmarking: avoid it, if at all possible. One reason is that it's hard and often counterintuitive; another is that it attracts Ravenous Bugblatter Beasts.

Despite all those dangers, benchmarking remains an important and underappreciated topic, and in this talk I would like to discuss cases where unexpected results arise from benchmarking BPF programs and why they matter. Kernel selftest benchmarks will serve as an example, but we will vary the load generation to produce different arrival distributions, introduce contention on tracepoints, or simulate a swarm of network connections in user space. To make this easier, we will use the home-grown tool Berserker, which was created to exercise real-world BPF applications.
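To illustrate why the arrival distribution matters for a load generator, here is a minimal sketch (not Berserker's actual code; the function name and parameters are made up for this example) that produces inter-arrival gaps for two workloads with the same average rate: exponentially distributed gaps (a Poisson arrival process, bursty) versus constant spacing (steady). A BPF program attached to, say, a tracepoint can behave very differently under the two:

```python
import random

def interarrival_times(n, mean_gap, distribution="poisson"):
    """Generate n inter-arrival gaps (in seconds) for a load generator.

    "poisson" draws exponentially distributed gaps, i.e. a Poisson
    arrival process with occasional bursts; "uniform" spaces events at
    a constant rate. Both have the same mean gap, so the average load
    is identical, but the burstiness is not.
    """
    if distribution == "poisson":
        return [random.expovariate(1.0 / mean_gap) for _ in range(n)]
    if distribution == "uniform":
        return [mean_gap] * n
    raise ValueError(f"unknown distribution: {distribution}")

if __name__ == "__main__":
    random.seed(42)  # reproducible demo
    bursty = interarrival_times(10_000, mean_gap=0.001)
    steady = interarrival_times(10_000, mean_gap=0.001,
                                distribution="uniform")
    # Same average rate, very different worst-case gaps/bursts:
    print("mean bursty gap:", sum(bursty) / len(bursty))
    print("max bursty gap:", max(bursty), "vs steady:", max(steady))
```

Driving the same benchmark with both distributions and comparing the results is one cheap way to catch conclusions that only hold under unrealistically smooth load.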