Keynote SSTIC 2009 – Fuzzing, past present and future

Presentation: Ari Takanen

Software has a "vulnerability window" just after release.
The point of this presentation is not to discuss software defects (zero-days) themselves.

Fuzzing has some known problems: finding good metrics, choosing the right tool, and focusing the right resources to find more bugs.
And of course, there is the question of vendors' motivation to use fuzzing on their own applications.

A good fuzzer has to cover 3 main categories:

  1. Features
  2. Performance
  3. Robustness

The original fuzzing system was completely random (no link to the software's protocol), but it did use the interface model (command-line parameters)! The results were not really interesting.

In summary:
Always fuzz against a model (protocols, features).
Watch the instrumentation (memory leaks, memory corruptions, business logic, etc.).
And of course, always use automation (generating, executing, analysing).
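That generate/execute/analyse loop can be sketched in a few lines. This is a minimal illustration, not any real fuzzer's code: it assumes a target program that reads its input from stdin, and all function names are made up for the example.

```python
import random
import subprocess

def generate(seed: bytes) -> bytes:
    """Produce a test case by overwriting a few random bytes of a seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def execute(target: list[str], case: bytes) -> int:
    """Feed the test case to the target on stdin.
    A negative return code means the process was killed by a signal
    (on Unix, a likely crash)."""
    proc = subprocess.run(target, input=case, capture_output=True, timeout=5)
    return proc.returncode

def fuzz(target: list[str], seed: bytes, iterations: int = 1000) -> list[bytes]:
    """Run the full loop and collect the inputs that crashed the target."""
    crashes = []
    for _ in range(iterations):
        case = generate(seed)
        if execute(target, case) < 0:  # analyse: keep crashing inputs
            crashes.append(case)
    return crashes
```

Instrumentation (memory-leak or corruption detection) would replace the simple return-code check in a real setup.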

Note that fuzzing != robustness testing.
Most fuzzers have no random aspect at all.

There are 2 main techniques:
Mutation (non-intelligent, semi-random modifications) and generation (intelligent, model-based).
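The two techniques can be contrasted with a toy example. This sketch is purely illustrative (the "model" and its anomalous values are invented for the example): mutation flips bits in a valid sample, while generation enumerates messages from a protocol model with deliberate anomalies injected into each field.

```python
import itertools
import random

# Generation: a toy "model" of an HTTP request line, where each field
# has valid values plus deliberately anomalous ones (overlong strings,
# format strings, bad versions).
MODEL = {
    "method": ["GET", "POST", "A" * 10000, ""],
    "path": ["/", "/index.html", "/" + "A" * 65536, "/%n%n%n%n"],
    "version": ["HTTP/1.1", "HTTP/9.9", "HTTP/" + "1" * 1000],
}

def generate_cases():
    """Model-based generation: the cross product of the model, so every
    anomaly is exercised inside an otherwise well-formed message."""
    for method, path, version in itertools.product(
        MODEL["method"], MODEL["path"], MODEL["version"]
    ):
        yield f"{method} {path} {version}\r\n\r\n".encode()

def mutate(sample: bytes, rng: random.Random) -> bytes:
    """Mutation: semi-random modification of a valid sample
    (here, a single bit flip at a random position)."""
    data = bytearray(sample)
    i = rng.randrange(len(data))
    data[i] ^= 1 << rng.randrange(8)
    return bytes(data)
```

Mutation needs no knowledge of the protocol, only a valid sample; generation needs the model up front but covers the anomalies systematically.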

PROTOS (the famous project Ari worked on) was created to avoid any random part and to focus on truly intelligent tests, based on a precise specification of the software (protocols and features).

What about the future?
More complex behavior: mutation tests, more complex block-based tests, automated building of protocol models, and syntax and semantic anomalisation.

Some interesting figures:
The best fuzzers detect about 70% of bugs.
Combining 2 fuzzers: 70-90%.

Fuzzing is now more present in industry. All major software security groups use fuzzing. It should be used more often in a product's Quality Assurance.