Open or Closed Source Code Irrelevant to Security


I spend quite a bit of time talking in front of groups of people, and one of the things I’m often asked is whether I use open source software (OSS) because I think it’s more secure than proprietary, or “closed source,” software. I’ll bet this same topic has come up once or twice (this week) in your office, too.

Now, rest assured that debate over this issue is fraught with zealots on both sides, and the debate isn’t likely to end with this column. I do think, though, that my answer to the question is interesting to explore because it gets to the heart of some key issues regarding software security practices.

Before I go on, I should say that, yes, I am an avid user of “the penguin,” Linux. I run my business almost entirely on systems running Debian Linux (Sarge). And, yes, I like to think of myself as being pretty security conscious when it comes to matters of my company’s security, as well as the security of my family’s personal information.

But that’s not why I choose to run Linux. Note, too, that I have a road warrior laptop that runs XP Professional, mostly for full compatibility with the (largely) Microsoft Office world of my customers. So with that out of the way, let’s explore the issues a bit…

Like so many engineering questions, the answer obviously is “it depends.” For starters, the OSS believers feel their software is more secure because the source code is available for public scrutiny, and security fixes can be contributed by anyone in a large community of users. An end user could even modify the source code of an operating system or application himself to include security-related modifications.

All of this is true.

On the other hand, proprietary software believers feel their software is more secure because the professional developers who design and implement the code follow rigorous internal standards. These standards generally include comprehensive testing of the software prior to delivery to customers. What’s more, the source code is not available to the general public, so it’s not likely that curious people with way too much free time on their hands will find bugs and flaws in the software.

All of this is true as well.

In my view, both of these positions are fundamentally flawed in numerous ways. And this is where it starts to get interesting.

Sure enough, OSS source code is available for all the world to scrutinize. The problem, though, is that all the world doesn’t do that. Take, for example, the ill-fated Sardonix project. It was a DARPA-funded project to provide a public forum for vetting OSS software and making the results available to the world. But ‘build it and they will come’ wasn’t quite what happened. The project languished due to lack of interest, and it was eventually scrapped.

Making source code available to the world does little, if anything at all, to advance the security of the software.

Then, on the proprietary side of things, there’s the notion that their respective developers rigorously follow security standards. Well, that sure is not consistent with my experiences in the commercial software community. All too often, the developers are laser-focused on coding to functional specifications, and the security of the software is left as an afterthought. On top of that, I’ve found that the vast majority of software developers I’ve interacted with do not understand security issues very well.

Now, I’m obviously generalizing here, and it’s quite likely that there are many exceptions, but when I’ve polled my audiences on issues around attacks against software, I’ve found the IT security folks understand the attacks but not the software, and the software engineers understand the software but not the attacks.

And let’s not discount attackers’ ability to reverse engineer machine code to find bugs and flaws. Indeed, there is compelling anecdotal evidence to support the claim that attackers use vendor patches, distributed solely in binary form, to deduce the problems they address and to develop attack tools.

Keeping source code closed to external scrutiny, in and of itself, does little, if anything, to affect the security of the software.

The basic tenets of both sides of the open vs. closed debate don’t have much of anything to do with security. One can build secure or weak software in either form. Building secure software requires careful attention to security issues throughout every phase of the software’s development, from design through implementation, testing, and even deployment. Whether the source code (and design documentation, for that matter) becomes open or closed is utterly irrelevant to the security of the software.

Oh, and the real answer to why I choose Linux is quite simply that I’m comfortable with it as a user. As a UNIX desktop user since 1985 or so, I’m just more at home there. The fact that so much of today’s malware targets Microsoft’s operating systems and applications never factored into my decision (much).
