Someone asked the other day about software security.
How would I evaluate a software project to determine how “secure” their software is and what is needed to bridge the gaps? I thought for a moment and then asked them, “how portable is your software?” Not portable from a software architect’s “here’s what we should buy” perspective, but from the standpoint of craftsmanship: how many different platforms can you compile your code base on and still get a usable system? That is the real definition of portability, and one that makes a great first cut at creating a secure code base.
There are different aspects to software security that reflect the different pieces of creating a working software product – acquisition, development, testing, delivery – but core security issues tend to be highly technical and implementation dependent. This is a reflection of how you attack a software system. Buffer overflows in executables, data spills via network weaknesses, reverse engineering of compiled code, grabbing bits of key material via differential power analysis – these are all highly technical concerns and just a sampling of what a security engineer may face in protecting a system.

Creating portable software deals with issues of similar breadth and at the same technical level (which is why so many people try to purchase their way out, and why that never really works for portability or security). In dealing with byte order, structure alignment, compiler and architecture differences, and all the other problems you embrace as a developer compiling against multiple targets, you must completely engage in the effort. There is no way to phone it in and not face immediate problems. The net effect is that the development staff has to think. You can’t have a single person driving a large code base through this technical labyrinth at the last minute.

Imagine compiling a large code base against a small ARM target, an Intel target and a Freescale target. Now add to the mix three different compilers per target. That is a code base that is going to get beat on and reviewed. That is a code base that is going to be under configuration control, with quality checks and tests to make sure a fix for one target doesn’t break the others. And all that extra effort goes a long way toward making the code inherently secure. Let’s face it, many security sins pop up because of laziness and rushing. Create an environment with corrective forces that counter those habits and you can focus on the more subtle concerns. Portability forces organization, competence and effort, and that’s why it wins.
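To make the byte-order point concrete, here is a minimal sketch in C; the function names and the big-endian wire format are illustrative choices of mine, not from any particular project. The pointer-cast version compiles everywhere but quietly depends on the host’s byte order and alignment rules; the shift-based version defines the byte order explicitly and behaves the same on every target.

#include <stdint.h>
#include <stdio.h>

/* Non-portable: assumes the host's byte order and tolerance for
   unaligned stores match whatever the protocol expects. Works on
   one target, silently breaks on another. */
void put_u32_naive(uint8_t *buf, uint32_t value)
{
    *(uint32_t *)buf = value;   /* endianness- and alignment-dependent */
}

/* Portable: explicit shifts define big-endian byte order regardless
   of the host architecture, and byte-wise stores never fault on
   alignment. */
void put_u32_be(uint8_t *buf, uint32_t value)
{
    buf[0] = (uint8_t)(value >> 24);
    buf[1] = (uint8_t)(value >> 16);
    buf[2] = (uint8_t)(value >> 8);
    buf[3] = (uint8_t)(value);
}

int main(void)
{
    uint8_t buf[4];
    put_u32_be(buf, 0xDEADBEEFu);
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
    return 0;
}

The naive version may even pass tests on a little-endian Intel box and then hand garbage to a big-endian peer – exactly the class of bug a multi-target build surfaces on day one.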
And that is why my response concerning security was “how portable is your code?” If you have one person sitting in a cube hammering out code against a single prepackaged target, there is no telling what’s going on. If, however, you’ve got people constantly checking and revising, getting feedback from different compilers, and dealing with different protocols and architecture issues, then you’ve got an environment that takes you 80% of the way. A lot of the heavy lifting is already done. With that foundation in place, you just apply a little help in some specific areas and end up with some rock-solid code.
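One cheap corrective force of that kind is a compile-time layout check. This is a hedged sketch (C11, with the same illustrative struct and expected size I made up above, not anyone’s real protocol): if any target’s compiler pads a shared structure differently, the build breaks on that target instead of the data breaking on the wire.

#include <assert.h>
#include <stdint.h>

struct wire_header {
    uint8_t  type;      /* message type */
    uint8_t  flags;     /* option bits */
    uint16_t length;    /* payload length in bytes */
    uint32_t sequence;  /* sequence number */
};

/* If a compiler on any target pads this struct differently, the
   build fails right there, long before a fix for one target can
   quietly break the others. */
static_assert(sizeof(struct wire_header) == 8,
              "wire_header layout differs on this target");

It is a one-line test, but it is exactly the sort of check a multi-target code base accumulates naturally and a single-target one never does.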
But of course there is the rub. Portability and security are both long-lead items. People convince themselves that portability is something they can worry about later (and then discover, sooner than they’d like, that a minor update to their platform ecosystem exposes all of their non-portable warts). The same goes for security – it’s never a problem… until it’s a problem. In both cases, going back to “fix” the gaps is far more expensive than ironing them out as you go. Unfortunately, most managers are far more concerned with today’s costs than tomorrow’s risks. But that is a whole other topic…