Let’s Talk About the Real Issues in Software Security
June 10, 2014 | SOURCE: SourceClear
The other day, I was talking to a friend of a friend who runs the appsec program at a big financial services firm in Europe.
“We just tackled fixing XSS across all of our public websites. And we nailed it!” He grinned wide. “Can you imagine the damage something like that could do?”
As a matter of fact, I could. Cross-site scripting (XSS, as it has come to be known) can grab session information, account numbers, you name it. But I could also imagine the far greater damage possible by exploiting a back-end message queue (MQ), something website users never see and know nothing about. I could vividly imagine a developer, with larceny in his heart and nothing in his bank account, using it to make one covert million-dollar transaction after another.
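To make that contrast concrete, here is a minimal, hypothetical sketch of the kind of trusting back-end consumer that makes such covert transactions possible. It is plain Python with an in-process queue standing in for a real broker; the account names and message format are invented for illustration.

```python
# A minimal sketch (hypothetical names, no real broker) of a back-end
# message-queue consumer that trusts every message on the queue, so anyone
# who can enqueue a message can move money.
import json
import queue

payments = queue.Queue()  # stand-in for a real MQ product

def process_payment(raw_message: str) -> None:
    msg = json.loads(raw_message)
    # No check of who put the message on the queue, no signature, no limit:
    # the consumer assumes anything on the internal queue is legitimate.
    print(f"Transferring ${msg['amount']:,} from {msg['from']} to {msg['to']}")

# A developer (or attacker) with queue access can inject a covert transfer.
payments.put(json.dumps({"from": "OPS-RESERVE", "to": "ACCT-1337", "amount": 1_000_000}))

while not payments.empty():
    process_payment(payments.get())
```

No browser ever renders any of this, which is exactly why it draws so little attention.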
These days, cross-site scripting on websites is the most frequently reported security vulnerability. A few years ago, Symantec announced that it accounted for 84% of documented vulnerabilities.
No wonder XSS gets more than its fair share of press! Like the Kardashians, there’s usually nothing at all special about XSS exploits, except for their outlandishly high visibility. And visibility commands attention.
The media sees the Kardashians and reports on them, which increases their visibility, so the media reports on them even more. So it is with front-end vulnerabilities—the risks you can see with a browser. They are overreported precisely because they are seen, not just by a few security experts, but by lots of ordinary computer users.
We fear the dangers we see. We assume, by the fact of their visibility, that they are the greatest dangers we face. And so we report them and we defend against them.
On April 14 of this year, “someone burglarized Alpha Jewelry” in Oakland, New Jersey. Happens all the time. Anyone can see that the store owner should have had a better lock on the front door, right? Except, in this case, “police determined the burglar or burglars gained entry into the jewelry store after breaking through a wall at Re/Max Real Estate” next door.
Well, who could have seen that coming?
Not the store owner. Not even the beat cop. But a security expert sufficiently competent to read a blueprint would have checked the walls and floors and ceilings, not just the front door and windows.
Such an expert would never have assured the store owner that his “real vulnerability is the front door.” Sure, that door represented a visible vulnerability, but the wall, shared with a space the jewelry guy does not control, was a far more real, if far less visible, weak point.
It’s time for those of us involved in software security to shift the conversation from the visible issues to the real ones.
To assume a vulnerability is significant because it is visible on the front end is as irrational as dismissing a back-end vulnerability as unreal because it is invisible to everyone except a security expert or a security-conscious developer.
The most significant vulnerabilities are not those anyone with a browser can detect, but those that are buried deep in code and produce no visual, front-end evidence of their existence. The fact that a front-end exploit like XSS accounts for more than 8 out of 10 reported web security issues does not mean it is 80 percent more dangerous—more real—than any other issue. It means only that it is reported a lot more.
An exploit buried in the back end can do greater damage and can keep doing it over a longer period precisely because, like the burglars breaking through the jewelry store wall, it defies detection by anyone without specialized knowledge.
Specialized knowledge. The services of a security expert with a background in construction might have prevented the Alpha Jewelry heist. A better bet, however, would have been hiring a security-savvy architect to design the store in the first place.
If the real issues in software security are the grossly underreported back-end vulnerabilities—the weaknesses built into the code—isn’t it about time we both required and empowered developers to write security deep into the code they themselves create?
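What that might look like in practice is sketched below, picking up the queue consumer from earlier. This is a hedged illustration, not a prescription: the shared key, limit, and message format are invented, and real systems would also need key management, replay protection, and auditing.

```python
# A sketch of "security written into the code" for the earlier queue consumer:
# every message must carry an HMAC computed with a key the trusted producer
# holds, and oversized transfers are rejected outright.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-loaded-from-a-secrets-store"  # hypothetical
TRANSFER_LIMIT = 50_000

def sign(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def process_payment(raw_message: str, signature: str) -> None:
    payload = raw_message.encode()
    if not hmac.compare_digest(sign(payload), signature):
        raise ValueError("rejected: message not signed by a trusted producer")
    msg = json.loads(raw_message)
    if msg["amount"] > TRANSFER_LIMIT:
        raise ValueError("rejected: amount exceeds per-message limit")
    print(f"Transferring ${msg['amount']:,} from {msg['from']} to {msg['to']}")

# A legitimate, signed message is processed; an unsigned or oversized
# "covert million-dollar transaction" never gets through.
good = json.dumps({"from": "ACCT-100", "to": "ACCT-200", "amount": 250})
process_payment(good, sign(good.encode()))
```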
The only rational answer is yes.
– by Mark Curphey, SourceClear