by Eireann Leverett and Marion Marschalek

Summary : Marion is dangerous. Eireann is pretty. So, pretty.
IoCs (Indicators of Compromise) are the state-of-the-art method for describing the technological aspects of an incident. Currently, most IoCs are composed of rather "cheap" indicators: file hashes, domain names, IP addresses. Each carries a different "cost"; in other words, we can put two price tags on every indicator: how costly it is for the attacker to change, and how costly it is for the defender to apply.
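This two-price-tag idea can be sketched in a few lines. The indicator types and cost values below are illustrative assumptions for demonstration, not measurements from the research:

```python
from dataclasses import dataclass

@dataclass
class IndicatorType:
    name: str
    attacker_cost: float  # cost for the attacker to change (0..1)
    defender_cost: float  # cost for the defender to deploy (0..1)

# Hand-picked illustrative values -- not measured data.
INDICATORS = [
    IndicatorType("file hash", attacker_cost=0.1, defender_cost=0.1),
    IndicatorType("IP address", attacker_cost=0.2, defender_cost=0.2),
    IndicatorType("domain name", attacker_cost=0.3, defender_cost=0.2),
    IndicatorType("code-complexity metric", attacker_cost=0.8, defender_cost=0.5),
]

def value(ind: IndicatorType) -> float:
    """Naive score: good indicators are expensive for the attacker
    to change and cheap for the defender to apply."""
    return ind.attacker_cost - ind.defender_cost

for ind in sorted(INDICATORS, key=value, reverse=True):
    print(f"{ind.name:25s} score={value(ind):+.2f}")
```

Under this toy scoring, a complexity-based indicator outranks the "cheap" indicators precisely because its attacker-side price tag dominates its defender-side one.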
Some IoCs are harder for attackers to change than others, and we rank such indicators in our presentation, along with the reasoning and experiments that support the ranking. Other IoCs are harder for a defender to deploy than others; we analyse and rank these with the rigor you've come to expect from such buccaneers of bitshifting as ourselves. Finally, IoCs naturally carry an expiration date, rendering them useless as soon as the attacker manages to adapt.

The goal of this research is to build smarter indicators: indicators that are easy, and thus cheap, to extract, but expensive to change. We will present a proof of concept showing how to extract a plethora of metrics from malicious binaries using a disassembly framework and graph analysis tools, metrics that capture malware complexity rather than mere meta information. For each metric we will discuss how expensive it is to extract, how resilient it is against changes applied by the attacker, how much information it carries, and how closely it is tied to the cost of the actual attack.
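One such complexity metric can be sketched concretely: cyclomatic complexity computed from a recovered control-flow graph. The toy CFG below is a plain adjacency dict; in real use the graph would come from a disassembly framework (which one is an assumption here, since the abstract does not name its tooling):

```python
def cyclomatic_complexity(cfg: dict[str, list[str]]) -> int:
    """McCabe complexity M = E - N + 2P for a control-flow graph
    given as {basic_block: [successor_blocks]}."""
    nodes = set(cfg) | {s for succs in cfg.values() for s in succs}
    edges = sum(len(succs) for succs in cfg.values())
    # Count weakly connected components (P) with a simple flood fill.
    undirected = {n: set() for n in nodes}
    for src, succs in cfg.items():
        for dst in succs:
            undirected[src].add(dst)
            undirected[dst].add(src)
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(undirected[n] - seen)
    return edges - len(nodes) + 2 * components

# Toy CFG: entry -> check -> (loop_body -> check | exit)
toy = {"entry": ["check"], "check": ["loop_body", "exit"], "loop_body": ["check"]}
print(cyclomatic_complexity(toy))  # prints 2
```

Unlike a file hash, which an attacker invalidates by flipping a single byte, a metric like this only shifts when the attacker restructures the code itself, which is exactly the attacker-side cost the research is after.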
Next we analyze the frequency of indicators found in the MISP platform and compare it to a theoretically ideal ranking: one in which indicators are less ephemeral for the attacker and more easily deployable for the defender. We build our research on the assumption that MISP is the de facto standard for how indicators are stored and shared.
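A minimal sketch of that frequency comparison is shown below. The attribute list and the "ideal" ranking are synthetic stand-ins; a real analysis would pull attribute types from a MISP export, and the ordering would come from the cost experiments described above:

```python
from collections import Counter

# Synthetic attribute types, standing in for a MISP export.
observed = ["md5", "ip-dst", "domain", "md5", "sha256",
            "ip-dst", "md5", "domain", "sha1"]

# Most-shared indicator types, in descending frequency.
observed_rank = [t for t, _ in Counter(observed).most_common()]

# Hypothetical "ideal" ranking: hardest-to-change indicators first.
ideal_rank = ["domain", "ip-dst", "sha256", "sha1", "md5"]

# How far does each shared indicator type sit from its ideal position?
for pos, t in enumerate(observed_rank):
    print(f"{t:8s} observed #{pos}  ideal #{ideal_rank.index(t)}")
```

The gap between the two orderings is the point: the indicators shared most often are not the ones that would cost attackers the most to change.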
Lastly, we go over the MISP taxonomy of IoCs and discuss which indicators we might prefer going forward. We hope this leads to proposals for further indicators, and we'll make sure the audience knows how to propose them.
We'll conclude with a discussion of why cybercovigilance and post-market surveillance are the kinds of measurement the community needs most going forward. The frequency with which individual technologies are vulnerable and exploited is currently missing from most debates. This is something we hope will change substantially, both through this work and the work of others.