Security, Risk and Cyber

Risk is fundamental to security. The analysis of risk permeates every aspect of our job. Our customers hire us to address their specific concerns and tolerances, balancing mitigations and vulnerabilities against cost and impact. But these trade-offs and decisions often obscure the fact that we are really balancing present costs against future ones. How much are we willing to invest today to forestall a possible attack tomorrow? After all, if a system is never attacked, no mitigations are needed. If you place a USB fob in a barrel of concrete, seal it in a vat of lead and drop it in the Mariana Trench, do you really need to worry about encrypting the files it contains? On a greyer plane, if you tag a USB fob, create sign-out and sign-in controls and inspection points, limit the fob's use to a particular room, and put in place a decommission policy that requires grinding it to dust at the end of its life, do you need to encrypt the files? Maybe, but maybe not.

Trading off technical controls like encryption for operational controls like sign-out sheets shifts risk from the immediate to the future. Do we invest more in the front-end creation of the system that performs I/O on the USB fob, including all of the key-management infrastructure, or do we pass and focus on the later operation of the system and the fob? Or do we need both? Weighing the impacts of shifting and reassigning risk like this is what we as system security engineers are paid to do every day. These are healthy and important discussions (and arguments) to have as we create new products. The problems arise when no analysis of the true vulnerabilities and risks takes place, and decisions are made instead based on expediency and a lack of understanding. Many of the issues currently facing the world of security, cyber and beyond, are a reflection of decisions like these made years and decades ago. The risk was shifted forward, and we are now reaping the harvest.

This historical shifting of risk is no secret. The issues we face today are not new. Privilege escalation was first identified as a concern for software systems in 1974. How many CERT advisories in the last year (month?) have a privilege escalation issue at their crux? Professionals have been pointing out for decades the “hidden” issues being built into the world around us. Privilege escalation, privilege separation, loss of confidentiality, lack of integrity… the list goes on and on. So why are all of these bugs still popping up? Every month we learn of new cyber initiatives to combat a new cyber threat. Why? Because of how risk is assigned, and how (or whether) it is ever finally adjudicated.

Assignment and adjudication of risk. So easy to say and yet so hard to do. Each hardware and software implementation presents risks, and historically those risks, taken in aggregate, were (and are) easier to push out into the future. Not due to any true acceptance of the possible problems over the life of the system, but due to the immediate risk to cost and schedule during creation. The leadership in those programs in effect assigned the risk to future users and developers. You find this in projects every day. A security engineer advises a program to scan its source code to check for specific errors and issues. The leadership determines that such scans create too much of a delay in the project, so the scans are dropped. When the risks are pointed out, the response will rarely be “we don’t believe those are possible risks” but instead something like “we will mitigate those issues operationally”. This allows a positive answer to be crafted (not “we are ignoring the risk” but instead “we are moving the risk”) and the expense to be pushed down the road. As with most things in security, there is never really a problem until there is really a problem. For many years this worked, because the skills needed to exploit the technical issues behind these risks were rare in the population at large. That is no longer the case. Which brings us back to cyber initiatives.
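To make the dropped-scan example concrete, here is a minimal, hypothetical C sketch of the kind of defect a routine source scan surfaces; the function names are invented for illustration. Scanners vary in depth, but even simple pattern-based tools (flawfinder, for instance, flags every use of strcpy on sight) will call this out; skipping the scan is exactly the “we will mitigate operationally” move described above.

```c
/* Hypothetical example of what a source scan catches and what
 * "mitigating operationally" quietly defers to future operators. */
#include <stdio.h>
#include <string.h>

void save_label(const char *user_input)
{
    char label[16];
    /* BUG: no length check -- input longer than 15 bytes overflows
     * the stack buffer, a classic foothold for privilege escalation. */
    strcpy(label, user_input);
    printf("saved label: %s\n", label);
}

/* The fix a scan finding would prompt: bound the copy, fail loudly. */
void save_label_checked(const char *user_input)
{
    char label[16];
    if (strlen(user_input) >= sizeof(label)) {
        fprintf(stderr, "label too long, rejected\n");
        return;
    }
    strcpy(label, user_input); /* now provably within bounds */
    printf("saved label: %s\n", label);
}
```

The few minutes of schedule the checked version costs up front is precisely the present-versus-future trade discussed throughout this post.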

Few organizations want to admit to mishandling an issue that impacts their mission. Even fewer want to admit the problem has existed for decades. If you never allow for a final adjudication of your risk, there is zero chance that the adjudication will be wrong. Organizations have been quietly pushing their risk forward, year after year, and now they have to face a very ugly reality. The solution? Cyber. Now the old risk is effectively wiped away and a new, unaddressed risk takes its place. This is not new, and not in any way unique to system security. Technical issues are often pushed aside in organizations for fear of being wrong. Fear of failure and fear of being associated with problems are why consultants are so important. No matter how skilled an organization’s technical staff are, acting on their advice effectively forces management to own up to the problems the staff identifies. On the other hand, if you bring in a consultant, the problems become associated with that outside agency. In many ways the consultants become the holders of that risk. Of course the impact of the risk still lies squarely on the organization where the issue resides, but the appearance and causality may now be redirected. In a world of weekly, monthly and quarterly performance metrics, a brief redirection is often all that is needed.

This carries over to the new world of cyber as well. In many ways the world of cyber is a well-advertised assessment of which organizations were dealing with security issues and which weren’t. When an organization announces a new cyber division or cyber initiative, what it is really saying is “hi, look at me. I’ve had my head stuck in the sand for decades and I’ve finally decided to own up to my problems.” The next time you see an established organization announce a cyber initiative, ask yourself what it was doing to address these same problems 30 years ago. If you have never read it, I highly recommend Clifford Stoll’s “The Cuckoo’s Egg”. The last decade has been a slow roll of this cyber bandwagon, and that roll is only going to accelerate as the term attracts more money. But giving a problem a new name still doesn’t address the problem. The risks of software attacks still need to be addressed as systems are created. Arguments must be made on how to mitigate vulnerabilities to critical information and critical functions. Organizations still face trade-offs between what to incorporate now and what to add later. Security professionals are still making the same recommendations we’ve been making since (at least for me) the late 80’s. The biggest difference now is how hardware and software thread through everything we interact with, from watches to pacemakers to our cars. The word “cyber” may be thrown in with the hope of garnering a little more attention, but the same decisions will still face each organization’s leaders. Do we accept and deal with the risk today, or punt and worry tomorrow? System security engineers will be repeating the same concerns. The real question is whether leaders will actually listen this time.

(here be dragons – now and always)


Someone asked the other day about software security.

How would I evaluate a software project to determine how “secure” its software is and what is needed to bridge the gaps? I thought for a moment and then asked them, “how portable is your software?” Not portable from a software architect’s “here’s what we should buy” perspective, but from the point of view of craftsmanship: how many different platforms can you compile your code base on and get a usable system? This is the real definition of portability, and one that makes a great first cut at creating a secure code base.

There are different aspects to software security that reflect the different pieces of creating a working software product – acquisition, development, testing, delivery – but core security issues tend to be very technical and implementation dependent. This is a reflection of how you attack a software system. Buffer overflows in executables, data spills via network weaknesses, reverse engineering of compiled code, grabbing bits of key material via differential power analysis (DPA) – these are all highly technical concerns, and just a sampling of what a security engineer may face in protecting a system. Creating portable software deals with issues of similar breadth and at the same technical level (which is why so many people try to purchase their way out, and why that never really works for portability or security). In dealing with byte order, structure alignment, compiler and architecture differences, and all the other problems you embrace as a developer trying to compile against multiple targets, you must completely engage in the effort. There is no way to phone it in and not face immediate problems. The end effect is that the development staff has to think.

You can’t have a single person driving a large code base through this technical labyrinth at the last minute. Imagine compiling a large code base against a small ARM target, an Intel target and a Freescale target. Now add to the mix three different compilers per target. That is a code base that is going to get beat on and reviewed. That is a code base that is going to be under configuration control, with quality checks and tests to make sure a fix for one target doesn’t break the others. And all this extra effort goes a long way toward making the code inherently secure. Let’s face it, many security sins pop up because of laziness and rushing. Create an environment with corrective forces to counter these kinds of problems and you can focus on more subtle concerns. Portability forces organization, competence and effort, and that’s why it wins.
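As a concrete sketch of the discipline involved (a hypothetical example – the struct and function names are invented for illustration): a code base that memcpy()s a struct straight onto the wire bakes in one compiler’s padding and one CPU’s byte order, while serializing field by field in a fixed order behaves identically on every target.

```c
/* Hypothetical sketch of the byte-order and alignment discipline that
 * multi-target portability forces. Dumping `struct record` to the wire
 * with memcpy() depends on the host's endianness and padding; encoding
 * each field in a fixed order works the same on ARM, Intel, and
 * Freescale targets alike. */
#include <stdint.h>
#include <stddef.h>

struct record {
    uint16_t id;
    uint32_t timestamp;   /* compilers may pad differently before this */
};

/* Encode in big-endian (network) order, one byte at a time. */
size_t record_encode(const struct record *r, uint8_t buf[6])
{
    buf[0] = (uint8_t)(r->id >> 8);
    buf[1] = (uint8_t)(r->id);
    buf[2] = (uint8_t)(r->timestamp >> 24);
    buf[3] = (uint8_t)(r->timestamp >> 16);
    buf[4] = (uint8_t)(r->timestamp >> 8);
    buf[5] = (uint8_t)(r->timestamp);
    return 6;   /* wire size is fixed, regardless of sizeof(struct record) */
}

size_t record_decode(struct record *r, const uint8_t buf[6])
{
    r->id        = (uint16_t)((buf[0] << 8) | buf[1]);
    r->timestamp = ((uint32_t)buf[2] << 24) | ((uint32_t)buf[3] << 16)
                 | ((uint32_t)buf[4] << 8)  |  (uint32_t)buf[5];
    return 6;
}
```

An encoder like this compiles and behaves identically on big- and little-endian machines, which is exactly the property a multi-target build verifies for free – and exactly the kind of explicit data handling that also closes off whole classes of parsing and overflow bugs.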

And that is why my response concerning security was “how portable is your code?” If you have one person sitting in a cube hammering out code against a single prepackaged target, there is no telling what’s going on. If, however, you’ve got people constantly checking and revising, getting feedback from different compilers, dealing with different protocols and architecture issues, then you’ve got an environment that takes you 80% of the way. A lot of the heavy lifting is already done. With that foundation, a little targeted help in some specific areas leaves you with some rock-solid code.

But of course there is the rub. Portability and security are both long-lead items. People convince themselves that portability is something they can worry about later (and then discover, sooner rather than later, that a minor update to their platform ecosystem reveals all of their non-portable warts). The same goes for security – it’s never a problem… until it’s a problem. In both cases going back to “fix” the gaps is far more expensive than ironing them out as you go. Unfortunately most managers are far more concerned with today’s costs than tomorrow’s risks. But that is a whole other topic…
