Our goal is to promote improved understanding of bugs. We want it to be possible to learn the relative strengths and weaknesses of bug finding techniques under fair, concrete evaluation. Further, we aim to make evaluation cheap and frequent, so that results are always fresh and relevant. We also want to be able to investigate bugs scientifically, to determine whether there are features or measures that indicate a given bug will be easier or harder to find. Bug finding should be better than it is, and we should have ways to know, for sure, when it gets better.
We hope that Rode0day can serve these purposes. Archived results from past competitions will include source code identifying bug root causes as well as inputs that trigger the bugs. Researchers and practitioners can use these corpora to investigate new ideas, tune parameters, and diagnose failures. The corpora will also be a valuable community resource, enabling not just hill-climbing to build better bug finders but also scientific investigation. For example, one might hypothesize that bugs whose root causes lie near the start of a program are easier to discover (using current techniques) than those farther from the start. With a few months of Rode0day corpora in hand, this kind of claim could be investigated systematically and scientifically. However, we also want Rode0day to be exciting, which is why we will absolutely have scoreboards, rankings, and winners. It's fun to be in first place, and even more fun to unseat the leader. We want people to learn from Rode0day, but we also want them to play.
We use LAVA, an automated vulnerability injection system, to create buggy binaries paired with inputs known to trigger exactly those bugs. LAVA employs a precise, whole-system, dynamic taint analysis to locate attacker-controlled data in a program, and uses this, along with source-to-source transformation, to inject both vulnerabilities and benign, chaff-like modifications. The result is high-quality ground truth that we think can be used to evaluate the detection rates of vulnerability discovery systems. LAVA has already been used to run AutoCTF, a week-long capture-the-flag competition built from automatically generated challenges. The LAVA system was presented at IEEE Security and Privacy in 2016.
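To give a concrete sense of what an injected bug of this flavor looks like, here is a minimal, hand-written C sketch. It is illustrative only: the function and variable names (`parse_header`, `data_flow`) and the magic constant are ours, not taken from LAVA's output or from any Rode0day challenge. The general shape, though, is the one described above: attacker-controlled bytes are captured at one point in the program and later used, behind a magic-value guard, to corrupt memory at an attack point.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical global that ferries attacker-controlled bytes from the
 * point where they are read to the attack point. */
static unsigned int data_flow;

/* buf/len hold untrusted input, e.g. bytes read from an input file. */
void parse_header(const char *buf, size_t len) {
    char name[32];

    if (len >= 4)
        memcpy(&data_flow, buf, 4);   /* capture four attacker bytes */

    /* ... ordinary parsing work would happen here ... */

    size_t n = (len < sizeof(name)) ? len : sizeof(name) - 1;

    /* Injected bug: when the captured bytes equal the magic value, the
     * copy length is inflated by that (attacker-chosen) value, smashing
     * the stack buffer. For any other input the code behaves normally. */
    memcpy(name, buf, n + (data_flow == 0x6c617661u) * data_flow);

    printf("parsed %zu header bytes\n", n);
}
```

A bug finder discovers this bug only by producing an input whose relevant bytes decode to the magic constant, which is what makes each injected bug a discrete, checkable challenge with a known triggering input.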
Rode0day is a collaboration between researchers at MIT, MIT Lincoln Laboratory, and NYU. If you have questions, please email the organizers at rode0day@mit.edu. Rode0day is brought to you by the following people: