I'm at the opening session of the eGovOS conference. Whitfield Diffie, Chief Security Officer at Sun and co-inventor of the Diffie-Hellman algorithm, is speaking on the security aspects of open source software. The argument comes down to:
- More eyes looking at the code means that there will be fewer bugs leading to security issues.
- More eyes looking at the code means that there will be a greater chance that bugs will be exploited to cause security issues.
"Security is political and is always associated with someone's interests." The result of this observation is that lion's share of responsibility for security falls on the end user. In a closed-source world, the end user had no options other than chosing between finished products. There aren't many choices right now in many categories. In an open source world, the code is available for inspection and correction. Practically speaking, of course, end-user here has to mean 'government" or "large organization" since individuals won't usually spend much time looking through source code.
Part of the issue is that we've modeled computers on the world of publishing, when the artifact is more like an automobile than a book: it has function. In the world of automobiles, all kinds of reverse engineering takes place, creating a vibrant marketplace of aftermarket parts and people who know how to modify and customize cars.
Diffie is making an analogy to the standard cryptographic practice of making the system public while keeping the keys for individual messages private. This is not done for some altruistic reason, but out of a very real belief that the system is more secure by being public. A cryptographic system is complex and costly to engineer and so can't be easily swapped out; it therefore pays to have the system be well engineered and to not rely on any "secret" features in the system itself.
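To make that concrete, here's a minimal Python sketch of Diffie's own key-exchange idea (the toy prime and generator are my assumptions for illustration; real deployments use standardized groups of 2048 bits or more). Notice that everything defining the system (the prime, the generator, and the algorithm itself) is public; the only secrets are the per-session exponents.

```python
import secrets

# Public "system" parameters: anyone, including an attacker, may know these.
# The prime here is toy-sized for illustration; real systems use
# standardized groups such as those in RFC 3526.
p = 0xFFFFFFFB  # public prime modulus (largest prime below 2**32)
g = 5           # public generator

# Each party picks a private exponent; these are the only secrets.
a = secrets.randbelow(p - 2) + 1  # Alice's private key
b = secrets.randbelow(p - 2) + 1  # Bob's private key

# The exchanged values can travel over a completely open channel.
A = pow(g, a, p)  # Alice sends A to Bob
B = pow(g, b, p)  # Bob sends B to Alice

# Both sides compute the same shared secret without ever transmitting it,
# since (g**b)**a == (g**a)**b (mod p).
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret
```

Because only the exponents are secret, the whole apparatus can be published, standardized, and scrutinized without weakening it, which is exactly the point of the analogy to open source.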
One common argument against open source being more secure is that trojan horses can more easily be inserted into the code. The open source community regularly pooh-poohs this with "that can happen in closed source code as well." Whit makes a great comment that draws an important distinction: many large organizations that are concerned about security (e.g., the military) can control their environments much more tightly than others. For them, keeping trojan horses out by controlling who has access to the code is, perhaps, possible.