Well so "infosec people" can describe at least two distinct classes of professional technologist. There are "security researchers" and there's "security contractors" and they may sound like the same thing but they're actually very different. Researchers are the guys who produce CVE tickets, they have their fair share of dumshits, cringe lords, and mediocre joes just trying to collect a paycheck but in general they serve a pretty important role and there's some real talent there. Then there's security contractors that are hired goons whose occupation consists of equal parts fear mongering and beating real programmers over the head with automated extremely primitive "security reports". The latter group is a parasite upon the former who are themselves kind of the jedis of the developmental world: we know they're smart and they need to exist, ultimately they're a good thing but they really are a pain in the ass sometimes.
When you hear IT people or developers bitching about security folks, it's generally the contractor variety they mean.
But security breaches are pretty common, so what is it that makes other people continuously dismissive of some elements of the security community? The answer, at least as I see it, is twofold. The simpler answer is that development is hard and most programmers are fucking hopeless at it. Cargo-cult sorry sons of bitches who, if they were carpenters instead of programmers, would be able to hammer nails sorta OK and nothing else, and who have been carried through their careers because people felt sorry for them. They have a hard time composing systems they haven't seen a hundred times before, and any thought towards adversarial usage just isn't there, only a handful of dogmas. (Null checking in java-land is a great example of this: not a security issue per se, but it demonstrates people's ability to cobble together software with no actual insight into the process, leaning instead on received wisdom.) Sad but true.
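If you haven't seen the null checking dogma in the wild, it sketches out to something like this (hypothetical code, Python standing in for the Java, not anything from a real project): a defensive check applied by ritual on a value the caller's contract already guarantees, while the field that genuinely can be missing goes unchecked.

```python
# Hypothetical sketch of the dogma; not real code from any project.
def get_display_name(user):
    # Cargo-cult check: the caller's contract already guarantees `user`
    # is non-None, but the check goes in anyway, because "you always
    # check for null" is the received wisdom.
    if user is None:
        return ""
    # Meanwhile the field that actually can be absent is dereferenced
    # blindly: the ritual never came with a model of *why* you check.
    return user["profile"]["display_name"].strip()
```

The point isn't that the check is expensive, it's that it was written with no model of where absence can actually occur.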
But of course that isn't everyone; legitimately smart people will ignore security issues too, although usually at a much lower rate. Why does that happen? It's easy to explain "I didn't see this" because systems are complex, but when someone turns up at your desk with an issue and you're like "nahhh", isn't that just negligence? On a level, yes, but when prod is fucked and an RCA lands at your feet, that's a shitty place to be; no one would sign up for that out of laziness. The answer is that the rate of false positives in security analysis (usually automated security analysis; a similar issue exists with automated analysis in general) is so high as to make it basically meaningless, at least in most cases I've seen. Donald Norman has done a fair amount of research on this, but it's lost on sec folks because "lol, we're not designers". It's kind of obvious, though: when the majority of "security issues" you're confronted with are nothing but fear mongering, it's hard to take them seriously.
An anecdote: a dude shows up at my desk the other day. "There's an XSS vulnerability in your project." We look at it: we take a number and render it into the markup without escaping it, because of special circumstances. OK, I understand, the automated checker sees an unsanitized field getting sent to the user. But it's a number; the type system guarantees us, in a formally provable way, that it's a number. It renders as a string of exactly 10 characters, none of which can constitute an escape code or really do anything. "But what if your DB returns something that isn't a number?" Well OK, let's say the DB randomly stops doing the one thing it's supposed to do, which is maintain relational integrity. Let's say it magically does that for some reason. Then the cast fails, we 500 out, and production support deals with the magical database. "What if it doesn't though?" Fucking what if, in an act of divine intervention, RNJesus decides every UUID we generate is going to be exactly the same for the next 3 years? Well, we're fucked 12 ways to Sunday, the world economy is going to collapse, and we'll probably all die in a tidal wave, but it's not my job to deal with impossible hypotheticals. This shit show went through like 5 levels of managers, all but one of whom couldn't write hello world, and the decision comes back that I'm wrong. So we eat shit and stack another half second of load time onto requests because some dipshit's audit tool needs to be appeased.
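For the record, the situation boils down to something like this (names hypothetical, Python standing in for the actual stack): once the value has survived a cast to an integer, there is nothing left in it that can open a tag or break out of an attribute, and the "what if the DB returns garbage" case dies at the cast, not on the page.

```python
# Sketch of the anecdote above; function and class names are made up.
def render_account_id(raw_db_value):
    """Render the id into markup WITHOUT HTML-escaping it."""
    # If the DB somehow hands back a non-number, this raises and we
    # 500 out; nothing unsanitized ever reaches the page.
    account_id = int(raw_db_value)
    # Exactly 10 characters, all digits; no character here can
    # constitute an escape code or do anything else.
    return f'<span class="account-id">{account_id:010d}</span>'

print(render_account_id(42))  # <span class="account-id">0000000042</span>

try:
    render_account_id("<script>alert(1)</script>")
except ValueError:
    print("cast failed, 500, production support's problem")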
Meanwhile, I pointed out a timing attack, and some architect is all like "nahh, no way anyone's going to figure out that exists." So in a few months a major American financial institution is going to ship a product with a timing attack known to exactly three grunt programmers and a lead, and we're all just hoping really hard no one sees that shit. Try escalating it, you say? Yeah, well, that's hard to do when no one who can do anything knows what the fuck time-to-first-byte means, much less what a timing attack is.
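For anyone who hasn't run into one: the simplest form of timing attack is an early-exit comparison leaking information through response time. A minimal sketch, with a hypothetical token check standing in for whatever the product actually does:

```python
# Hypothetical token check; not the actual product's code.
import hmac

SECRET_TOKEN = "s3cr3t-api-token"

def check_token_naive(supplied: str) -> bool:
    # == bails out at the first mismatching character, so the more of
    # the prefix an attacker has guessed right, the longer the call
    # takes. Measured over enough requests, that recovers the token
    # one character at a time.
    return supplied == SECRET_TOKEN

def check_token_constant_time(supplied: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # differ, which closes the timing side channel.
    return hmac.compare_digest(supplied.encode(), SECRET_TOKEN.encode())
```

The fix is a one-liner, which is exactly what makes "nahh, no one will find it" such a dumb trade.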
IN SHORT, technology is fucked, and if we're lucky we'll all die in a natural disaster before AI comes about and decides to go all "I Have No Mouth and I Must Scream" on us, because some dumbshit did something stupid and there was just enough stupidity around him to propagate his fuckup far enough to fuck us all.
Post last edited by Lanny at 2016-11-21T02:20:55.813037+00:00