Responsibility for the Harm and Risk of Software Security Flaws, presented at Black Hat DC 2011

by Cassio Goldschmidt


Summary: Who is responsible for the harm and risk of security flaws? The advent of worldwide networks such as the internet made software security (or the lack of it) a problem of international proportions. There are no mathematical or statistical risk models available today to assess networked systems with interdependent failures. Without such tools, decision makers are bound to overinvest in activities that don't generate the desired return on investment, or to underinvest in mitigations, risking dreadful consequences. Experience suggests that no party is solely responsible for the harm and risk of software security flaws, but a model of partial responsibility can only emerge once the duties and motivations of all parties are examined and understood.
State-of-the-art practices in software development won't guarantee products free of flaws. Modern computer hardware cannot implement the infinite precision of mathematics without truncating numbers and calculations. Many of the most common operating systems, network protocols and programming languages in use today were first conceived without basic security principles in mind. Compromises are made to keep newer versions of these systems compatible with previous ones, so evolving software inherits all the flaws and risks present in this layered and interdependent solution. Lastly, there is no formal way to prove software correctness using mathematics, nor any definitive authority to assert the absence of vulnerabilities. The slightest coding error can lead to a fatal flaw. Without a doubt, vulnerabilities in software applications will continue to be part of our daily lives for years to come.
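The truncation described above is easy to observe in practice. As a minimal illustration (in Python, though the behavior is a property of the underlying hardware, not the language), decimal fractions are rounded in IEEE 754 floating point, and fixed-width integers wrap around at their maximum value:

```python
import ctypes

# IEEE 754 doubles cannot represent most decimal fractions exactly,
# so even trivial arithmetic is silently truncated.
total = 0.1 + 0.2
print(total)          # 0.30000000000000004, not 0.3
print(total == 0.3)   # False

# Fixed-width integer arithmetic has the analogous limitation:
# a 32-bit signed counter wraps past 2**31 - 1 (a classic source
# of integer-overflow vulnerabilities).
counter = ctypes.c_int32(2**31 - 1)
counter.value += 1
print(counter.value)  # -2147483648
```

Flaws of exactly this kind, benign-looking rounding and overflow, have historically been the root cause of serious vulnerabilities.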
Decisions made by adopters, such as whether to install a patch, upgrade a system or employ insecure configurations, create externalities that have implications for the security of other systems. Proper cyber hygiene and education are vital to stop the proliferation of computer worms, viruses and botnets. Furthermore, end users, corporations and large governments directly influence software vendors' decisions to invest in security by voting with their money every time software is purchased or pirated.
Security researchers largely influence the overall state of software security depending on the approach they take to disclose their findings. While many believe full disclosure practices helped the software industry advance security in the past, several of the most devastating computer worms were created by borrowing from information detailed in researchers' full disclosures. Both incentives and penalties have been created for security researchers: a number of stories of vendors suing security researchers are available in the press, and some countries have enacted laws banning the use and development of hacking tools. At the same time, companies such as iDefense promoted the creation of a market for security vulnerabilities by offering rewards larger than a year's worth of salary for a software practitioner in countries such as China and India.
Effective policy and standards can serve as leverage to fix the problem, either by providing incentives or penalties. Attempts such as PCI created a perverse incentive that diverted decision makers' goals toward compliance instead of security. Stiff mandates and ineffective laws have been observed internationally. Given the fast pace of the industry, laws to combat software vulnerabilities may become obsolete before they are enacted. Alternatively, the government can use its own buying power to encourage adoption of good security standards. One example of this is the Federal Desktop Core Configuration (FDCC).
The proposed presentation is based on research done by Cassio Goldschmidt, Sr. Manager at Symantec Corporation; Melissa J. Dark, Professor & Assistant Dean, Department of Computer and Information Technology, Purdue University; and Hina Chaudhry, Ph.D. candidate at Purdue University. It is a reflection on the role of each player involved in the software lifecycle, the incentives (and disincentives) they have to perform their tasks, the network effects of their actions and the results on the state of software security. The full text is available as a chapter of Information Assurance & Security Ethics (ISBN 978-1-61692-245-0, hardcover; ISBN 978-1-61692-246-7, ebook).