Monday, July 25, 2016

This Old Vulnerability: Guest Post: Vineetha Paruchuri on Modeling How Vulnerability is Created, Rather than Remediated

[Editor's Note: Vineetha's guest blog is a companion piece to the Lab Mouse post found here]

It all started on Twitter when I called Bailey out on his crappy taste in music (naturally, he vehemently disagrees with the “crappy” part). [Editor’s Note: My musical tastes are sublime and don't include Evanescence…] [Author’s Retort: N-O-P-E]


We got to ranting about InfoSec things in private; we initially felt that nuances in textual conversations usually get lost in translation, and that one might often need to explain further. It quickly became evident that this was not the case in our discussions.


Of course, like your typical hyper-rational engineers, we instinctively started modeling our behavior - analyzing why we seem to process information very similarly, how people intellectually process things in general, how that affects the code they write, or the way they visualize technical problems, or the way they interpret security concepts. This line of thought extended to our discussion on vulnerabilities.


For the better part of the past year, I have passively been mulling over specific combinations/variations of arguments from a couple of papers, because I saw immense potential for these ideas in practical scenarios. Visualizing these arguments from the perspective of vulnerability identification and disclosure (residual thoughts from my discussion with Bailey) gave me the much-needed context that tied some things together.


In most cases, at the core, all vulnerabilities boil down to something that the developer/architect/whoever overlooked, that someone else noticed. To simplify terminology, let’s call this “someone else” an attacker, and the “developer/architect/whoever” a systems designer. The system is ultimately designed for the end-user.


The attacker might see things that the systems designer missed, because attackers visualize the system quite differently. Further, the end-user might (un)intentionally perform some action(s) that send the system into a state not initially modeled by the systems designer. In such cases, when the system does not behave as expected (and also in other cases, e.g. when the end-user doesn't get the desired functionality), the end-user often figures out workarounds to get the job done. Such workarounds routinely circumvent the security mechanisms in place, too; once the system is in an undocumented state, there is no telling what security measures the workaround bypassed.


In essence, when analyzing from the context of actor-behavior, vulnerabilities can be the result of any (or all) of the above factors, or some combination thereof. At a glance, it looks like delineating and formalizing these factors would have some value from the perspective of vulnerability analysis.


Based on the above reasoning, we can delineate the major factors contributing to software/system vulnerabilities from the actor-behavior standpoint as follows:


First, the issue of what the systems designer doesn't see that others might see: the blind spots. In “It’s the Psychology Stupid: How Heuristics Explain Software Vulnerabilities and How Priming Can Illuminate Developer’s Blind Spots”, Oliveira et al. discuss the idea that “software vulnerabilities are blind spots in the developers’ heuristic decision-making process”.


Second, the issue of how the attacker-mindset differs from other actors’ in the system, and what that means. Quite a lot has been written on this topic (hacker behavior/motivations) from the perspective of sociology/psychology, law/policy, technology etc., but some interesting thoughts on how to cultivate an attacker-mindset, and what the “hacker methodology” is, are given in “What Hackers Learn That The Rest Of Us Don’t” by Sergey Bratus.


Third, the obvious existence of differential perceptions amongst the various actors in a system, and the resulting security circumvention that leaves suboptimally defended systems exposed to vulnerabilities. In “Mismorphism: A Semiotic Model of Computer Security Circumvention (Extended Version)”, Smith et al. examine security circumvention using a model based on semiotic triads. The paper explores how differential perceptions affect systems from the perspective of security circumvention, but it got me thinking about how the same idea could also be explored in settings that don't necessarily involve circumvention.


Although not all of these arguments apply directly to the vulnerability we are currently discussing (they all certainly apply in other ways; more on that in another post, another time perhaps), I briefly touched upon them because these issues are interrelated, and the larger problem of vulnerability identification and mitigation is better served when such component issues are discussed together. In essence, understanding the core logic behind each of these arguments and tailoring it to specific contexts might help in better vulnerability detection and mitigation. Plus, anyone looking at the same issues now has a decent starting point for finding relevant information in case they want to explore them further.


That said, in the context of the vulnerability that’s currently being discussed, apart from thinking about langsec (but of course! Again, more on that some other time), further analysis of the first issue listed above, concerning developer blind spots, could prove quite useful. The primary argument comes from the paper “It’s the Psychology Stupid: How Heuristics Explain Software Vulnerabilities and How Priming Can Illuminate Developer’s Blind Spots” by Oliveira et al.


The lessons from Oliveira's paper play directly into the remedial measures Bailey touched upon in his post - enforcing organizational coding standards, evaluating the context of each pointer, improving coding practices, and so on. Rather than looking at the issue retrospectively, such as in the context of code reviews, Oliveira et al.'s paper outlines how we can prime developers to minimize such blind spots while coding (of course, code reviews can and should still be done, but increasing the quality of the code in the first place is always the primary goal).
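To make the "context of each pointer" point concrete, here is a minimal, hypothetical C sketch (my own illustration, not the code from Bailey's post) of the kind of pointer misuse that many compilers have historically accepted with nothing more than a warning - exactly the sort of thing a developer's heuristics can gloss over:

/* Hypothetical illustration only - not the code from Bailey's post.
 * <stdlib.h> is missing, so older compilers (or permissive modes) treat
 * malloc() as implicitly declared and returning int. On a 64-bit system
 * the returned pointer can be silently truncated; the only hint is a
 * warning that is easy to scroll past or for an IDE to bury. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(64);   /* warning: implicit declaration of 'malloc',
                                 and a pointer made from an integer */
    if (buf == NULL)
        return 1;

    strcpy(buf, "this may write through a truncated pointer");
    printf("%s\n", buf);
    return 0;
}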


Oliveira's paper explores a new hypothesis: that software vulnerabilities arise due to blind spots in developers' heuristic decision-making processes. A second hypothesis, which neatly dovetails with the first, is investigated in tandem: whether priming software developers on the spot (rather than relying on their prior security knowledge) and alerting them to the possibility of vulnerabilities in real time would be effective in changing developer perspective on security, eventually making security thinking part of developers' repertoire of heuristics.


This paper points out, quite rightly, that “The frequent condemnation of security education and criticism on software developers, however, do not help to reason about the root causes of security vulnerabilities”.


Psychological research shows that, due to limitations in working memory capacity, humans often engage in heuristic-based decision-making. Heuristics are simple computational models that help solve problems without needing to consider all of the information available. Because of their relative simplicity, heuristics require less cognitive effort, and so they are an adaptive response to humans' short-term working memory when dealing with complex problems involving large amounts of information. In such situations, humans make “simplified, suboptimal decisions regardless of the rich information available”. We need to account for these cognitive limitations if we want developers to write more secure code; security education and/or code reviews alone won't be effective in making code safer.


Oliveira's paper supports this primary hypothesis, and suggests priming - explicitly cueing developers on the spot - as an effective mechanism for eventually incorporating security thinking into developers' cognitive processing. One of the ways the paper proposes to do this is to have developer interfaces (IDEs, text editors, compilers, etc.) display security information pertinent to the context of the current working scenario.
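As a rough, purely illustrative approximation of that idea using facilities that already exist (my sketch, not something the paper itself proposes), a team could promote the pointer-related diagnostics it cares about to hard errors, so the "interface" interrupts the developer the moment a risky construct is typed rather than at review time. This assumes GCC or Clang:

/* Illustrative sketch only; assumes GCC or Clang. Promoting selected
 * diagnostics to errors is a blunt, compiler-level stand-in for the
 * context-aware, in-IDE prompts Oliveira et al. envision. */
#pragma GCC diagnostic error "-Wimplicit-function-declaration"
#pragma GCC diagnostic error "-Wint-conversion"

#include <stdlib.h>
#include <string.h>

char *make_greeting(void)
{
    char *buf = malloc(16);   /* compiles cleanly: the header is included and
                                 the pointer keeps its intended type; drop the
                                 #include and the build now fails instead of
                                 merely warning */
    if (buf == NULL)
        return NULL;
    strcpy(buf, "hello");
    return buf;
}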


Naturally, further research is needed on what specific security information is useful, which interfaces work best, and whether there are other or better ways to prime developers, but the point here is that more security education and more code reviews alone are not the answer to preventing such vulnerabilities.


One needs to get to the root of the problem - be it addressing systemic insecurity in the coding language, mitigating developer blind spots, or bridging differentials in actor-perspectives.


So why should we care about mechanisms factoring in actor-behavior when code reviews, semantic checkers etc. work just fine?


First, they clearly don't, at least not well enough (also, maybe "working just fine" doesn't quite cut it for some folks).


Second, this is also what someone who has dealt with enough vulnerability identification and mitigation might instinctively reason out (but since we technologists tend to trust empirical evidence more, the papers I cited should do the job?). For example, in the context of the current vulnerability, Bailey says the following:


“But, if engineers don't notice this, or if there is no warning message printed by the compiler, or if an IDE is being used that doesn't adequately highlight the warning messages, this can result in critical flaws in software.”


I know for a fact that he hadn't read Oliveira's paper before he wrote that (I'm not even sure he has read beyond the abstract even now). In fact, looking at what happened in the code and how the whole thing played out is what prompted me to think about how priming could apply here - and then I saw that Bailey had reasoned it out the same way!


So yes, even in the worst case - supposing such mechanisms that factor in actor-behavior were not useful in any other context (while *I* think they most certainly would be) - at least a few subclasses of fairly intractable bugs (like the current one) could be caught and mitigated more effectively.


Third, mitigating a vulnerability at its source would in turn facilitate more effective mechanisms for identifying vulnerabilities. For example, if we identify the primary factors causing such vulnerabilities, we could potentially leverage that knowledge to build more effective systematic, automatable vulnerability identification mechanisms (yes, a few formal mechanisms currently exist, but their efficacy leaves a lot to be desired, because they act more as band-aids than as remedies that address the root cause; i.e. they're often not solving the right problem).


What I mean to say is...  


Hence why maybe... it’s about d*** time we started looking at these issues as more than just failures in coding constructs…


< quietly sashays away and lets Bailey deal with the aftermath of any fires she lit >

Vineetha Paruchuri
M.S., Computer Science
Dartmouth College


Author’s Note: Before all ye grammar pedants come out of the woodwork to get me, the “hence why maybe” thing was intentional. (BTW Bailey, I censored out my own “damn”, thank you. Now don’t censor this “damn”, or the one I just typed; ugh, this is turning so meta). So anyway, any other (grammar) mistakes that were overlooked are totally Bailey’s fault (he seems to take “The Editor” thing a tad too seriously; so go burn him for those if you must; bye now).



“I checked it very thoroughly,” said the computer, “and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you’ve never actually known what the question is.”
