by Matt Miller

Tags: Exploitation, Tools, Metrics

Summary: The increased difficulty of developing reliable exploits for memory safety vulnerabilities has also made it more difficult to characterize their exploitability. As a result, there is currently no well-defined or broadly agreed-upon standard by which exploitability is determined. There are certainly good reasons for this: exploitability is influenced by many variables, and exploit writing is generally a highly skilled and creative process. Still, the lack of an established model for determining exploitability tends to force an analyst either to prove exploitability through a working exploit or to make a conservative, coarse-grained estimate of it. In practice, both of these are undesirable: the first approach does not currently scale, and the second typically assumes a worst-case scenario that does not account for the effects that mitigations and contextual factors may have on exploitability. This can lead to an overestimation of actual risk and has made it challenging to measure how these variables contribute to the increased difficulty of exploiting vulnerabilities.
To help improve on this situation, this presentation describes an experimental model that can be used to classify memory safety vulnerabilities and reason about their exploitability. In this model, the invariants of a vulnerability are specified in a structured, well-defined format that can be independently reviewed and verified. This specification then forms the initial state for an automaton that provides an abstract representation of the primitives and techniques that facilitate or mitigate exploitation. To demonstrate the utility of this model, the presentation will show how it can be used to classify a vulnerability, measure exploitability, and enable intelligent investment in vulnerability prevention and exploit mitigation technologies.
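The core idea described above can be illustrated in miniature. The following sketch is not the presentation's actual model; it is a toy transition system, with hypothetical state and primitive names, in which a vulnerability's invariants form the initial state, edges represent exploitation primitives, and enabled mitigations prune edges. Exploitability then reduces to reachability of a target state:

```python
# Toy sketch of an exploitability automaton (illustrative only; all
# state and primitive names are hypothetical). States describe what an
# attacker currently has; each edge is a primitive that converts one
# state into a stronger one; a mitigation blocks a named primitive.
from collections import deque

# Hypothetical primitive chain for a heap out-of-bounds write.
TRANSITIONS = {
    "oob_write": [("corrupt_heap_metadata", "arbitrary_write")],
    "arbitrary_write": [("overwrite_func_ptr", "control_of_pc")],
    "control_of_pc": [("rop_chain", "code_execution")],
}

def exploitable(initial, target, mitigations=frozenset()):
    """Breadth-first search for a primitive chain from the initial
    state (the vulnerability's invariants) to the target state,
    skipping any primitive blocked by an enabled mitigation."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for primitive, nxt in TRANSITIONS.get(state, []):
            if primitive in mitigations or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(nxt)
    return False

# With no mitigations the chain reaches code execution; blocking the
# hypothetical "rop_chain" primitive (e.g. via a control-flow
# integrity mitigation) breaks the chain.
print(exploitable("oob_write", "code_execution"))                 # True
print(exploitable("oob_write", "code_execution", {"rop_chain"}))  # False
```

The appeal of this framing is that the vulnerability specification and the mitigation set are independent inputs, so the same automaton can measure how each mitigation changes the reachability of attacker goals.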