Security, Risk and Cyber

Risk is fundamental to security. The analysis of risk permeates every aspect of our job. Our customers hire us to address their specific concerns and tolerances, balancing mitigations and vulnerabilities against cost and impact. But these trade-offs and decisions often obscure the fact that we are really balancing present costs against future ones. How much are we willing to invest today to forestall a possible attack tomorrow? After all, if a system is never attacked, no mitigations are needed. If you place a USB fob in a barrel of concrete, seal it in a vat of lead and drop it into the Mariana Trench, do you really need to worry about encrypting the files it contains? On a greyer plane, if you tag a USB fob, create sign-out and sign-in controls and inspection points, limit the fob’s use to a particular room, and put in place a decommission policy that requires grinding it to dust at the end of its life, do you need to encrypt the files? Maybe, but maybe not. Trading technical controls like encryption for operational controls like sign-out sheets shifts risk from the immediate to the future. Do we invest more in the up-front creation of the system that performs I/O on the USB fob, including all of the key management infrastructure, or do we pass and focus on the later operation of the system and the fob? Or do we need both?

Weighing the impacts of shifting and reassigning risk like this is what we as system security engineers are paid to do every day. These are healthy and important discussions (and arguments) to have as we create new products. The problems arise when no analysis of the true vulnerabilities and risks is done, and decisions are made instead based on expediency and a lack of understanding. Many of the issues currently facing the world of security, cyber and beyond, are a reflection of decisions like these made years and decades ago. The risk was shifted forward, and we are now reaping the harvest.
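To make that trade-off concrete, here is a minimal sketch of the “technical control” half of the decision: encrypting the data before it ever touches the fob. It assumes Python and the third-party cryptography package, and the file paths and key handling are purely illustrative; the real cost is not these few lines, but the key management infrastructure that has to surround them.

```python
# A minimal sketch, assuming Python and the third-party `cryptography` package,
# of the "technical control" side of the trade-off: encrypt before writing to
# the fob. Paths and key handling here are illustrative assumptions only.
from cryptography.fernet import Fernet

def encrypt_to_fob(source_path: str, dest_path: str, key: bytes) -> None:
    """Read a plaintext file and write its ciphertext to the fob."""
    fernet = Fernet(key)
    with open(source_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())
    with open(dest_path, "wb") as dst:
        dst.write(ciphertext)

# Generating a key is the easy part; storing, distributing, rotating and
# eventually destroying it is the key management infrastructure whose up-front
# cost tempts programs to lean on operational controls instead.
key = Fernet.generate_key()
encrypt_to_fob("design_notes.txt", "/media/usb_fob/design_notes.enc", key)
```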

This historical shifting of the risk is no secret. The issues we face today are not new. Privilege escalation was first identified as a concern for software systems in 1974. How many CERT advisories in the last year (month?) have a privilege escalation issue at their crux? Professionals have been pointing out for decades the “hidden” issues being built into the world around us. Privilege escalation, failed privilege separation, loss of confidentiality, lack of integrity… the list goes on and on. So why are all of these bugs still popping up? Every month we learn of a new cyber initiative to combat a new cyber threat. Why? Because of the assignment of risk and the final adjudication of risk.

Assignment and adjudication of risk. So easy to say and yet so hard to do. Each hardware and software implementation presents risks, and historically those risks, taken in the aggregate, were (and are) easier to push out into the future than to address up front. Not because of any true acceptance of the possible problems over the life of the system, but because of the immediate risk to cost and schedule during creation. The leadership of those programs in effect assigned the risk to future users and developers. You find this in projects every day. A security engineer advises a program to scan its source code to check for specific errors and issues. The leadership decides that such scans delay the project too much, so the scans are dropped. When the risks are pointed out, the response will rarely be “we don’t believe those are possible risks” but instead something like “we will mitigate those issues operationally”. This allows a positive answer to be given (not “we are ignoring the risk” but “we are moving the risk”) and the expense to be pushed down the road. As with most things in security, there is never really a problem until there is really a problem. For many years this worked, because the skills needed to exploit the technical issues behind these risks were rare in the population at large. That is no longer the case. Which brings us back to cyber initiatives.
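For the scanning example, the thing being traded away is often just a build step. The sketch below is an illustrative assumption, not a recommendation of a particular product: it uses Python to invoke the open-source Bandit scanner on a hypothetical src/ directory. The point is that the up-front cost is small and visible, while the deferred cost is neither.

```python
# A minimal sketch, assuming Python, the open-source Bandit scanner and a
# hypothetical src/ directory, of wiring a source scan into the build.
import subprocess
import sys

def run_scan(source_dir: str = "src") -> int:
    """Run the scanner recursively; Bandit exits non-zero when it finds issues."""
    result = subprocess.run(["bandit", "-r", source_dir])
    return result.returncode

if __name__ == "__main__":
    # Failing the build here is the immediate cost-and-schedule hit that
    # leadership so often trades for a risk "mitigated operationally" later.
    sys.exit(run_scan())
```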

Few organizations want to admit to mishandling an issue that impacts their mission. Even fewer want to admit the problem has existed for decades. If you never allow for a final adjudication of your risk, there is zero chance that the adjudication will be wrong. Organizations have been quietly pushing their risk forward, year after year, and now they have to face a very ugly reality. The solution? Cyber. Now the old risk is effectively wiped away, and a new, still unaddressed, risk takes its place. This is not new and not in any way unique to system security. Technical issues are often pushed aside in organizations for fear of being wrong. Fear of failure and fear of being associated with problems are why consultants are so important. No matter how skilled an organization’s technical staff are, acting on their advice effectively forces management to own up to the problems the staff identify. Bring in a consultant, on the other hand, and the problems become associated with that outside agency. In many ways the consultants become the holders of that risk. Of course the impact of the risk still lies squarely on the organization where the issue resides, but the appearance and the causality may now be redirected. In a world of weekly, monthly and quarterly performance metrics, a brief redirection is often all that is needed.

This carries over to the new world of cyber as well. In many ways the world of cyber is a well-advertised assessment of which organizations were dealing with security issues and which organizations weren’t. When an organization announces a new cyber division or cyber initiative, what it is really saying is “hi, look at me. I’ve had my head stuck in the sand for decades and I’ve finally decided to own up to my problems.” The next time you see an established organization announce a cyber initiative, ask yourself what it was doing to address these same problems 30 years ago. If you have never read it, I highly recommend Clifford Stoll’s “The Cuckoo’s Egg”. The last decade has been a slow roll of this cyber bandwagon, and that roll is only going to accelerate as the term attracts more money. But giving a problem a new name still doesn’t address the problem. The risks of software attacks still need to be addressed as systems are created. Arguments must still be made over how to mitigate vulnerabilities to critical information and critical functions. Organizations still face trade-offs between what to incorporate now and what to add later. Security professionals are still making the same recommendations we’ve been making since (at least for me) the late ’80s. The biggest difference now is how hardware and software thread through everything we interact with, from watches to pacemakers to our cars. The word “cyber” may be thrown in with the hope of garnering a little more attention, but the same decisions will still face each organization’s leaders. Do we accept and deal with the risk today, or punt and worry tomorrow? System security engineers will be repeating the same concerns. The real question is whether leaders will actually listen this time.

(here be dragons – now and always)
