Open Source Does Not Equal Secure

Way back in 1999, I wrote about open-source software:

First, simply publishing the code does not automatically mean that people will examine it for security flaws. Security researchers are fickle and busy people. They do not have the time to examine every piece of source code that is published. So while opening up source code is a good thing, it is not a guarantee of security. I could name a dozen open source security libraries that no one has ever heard of, and no one has ever evaluated. On the other hand, the security code in Linux has been looked at by a lot of very good security engineers.

We have some new research from GitHub that bears this out. On average, vulnerabilities in the open-source libraries it hosts go four years before being detected. From a ZDNet article:

GitHub launched a deep-dive into the state of open source security, comparing information gathered from the organization’s dependency security features and the six package ecosystems supported on the platform across October 1, 2019, to September 30, 2020, and October 1, 2018, to September 30, 2019.

Only active repositories have been included, not including forks or ‘spam’ projects. The package ecosystems analyzed are Composer, Maven, npm, NuGet, PyPi, and RubyGems.

In comparison to 2019, GitHub found that 94% of projects now rely on open source components, with close to 700 dependencies on average. Most frequently, open source dependencies are found in JavaScript—94%—as well as Ruby and .NET, at 90%, respectively.

On average, vulnerabilities can go undetected for over four years in open source projects before disclosure. A fix is then usually available in just over a month, which GitHub says “indicates clear opportunities to improve vulnerability detection.”

Open source means that the code is available for security evaluation, not that it necessarily has been evaluated by anyone. This is an important distinction.

Posted on December 3, 2020 at 11:21 AM

Comments

Allen December 3, 2020 12:04 PM

Interesting to look at Linux from the 1999 viewpoint. I would say that today, critical kernel code gets reviewed pretty well, but not perfectly. The problem is that most of the code base is now external interface drivers. Many get little review. Also, the user apps in the repositories may get little to no review.

Peter Shenkin December 3, 2020 12:08 PM

It would seem reasonable for open-source providers to actively solicit examination from security experts just as they now solicit contributions from developers.

This may be a tough one, because there are so few security experts in comparison (I assume).

Also, is there any “security certification” procedure available, whereby an organization can state that it has examined a code base for security vulnerabilities, at one level or another, and found the code to pass the examination?

Of course, we all know that there are wheels within wheels and new threats will continue to emerge that were unsuspected before, but still, this might be better than nothing, if promulgated with appropriate caveats.

Finally, to what extent is programmatic code examination for security evaluation available or feasible or at least an active area of research?

Carl Mitchell December 3, 2020 12:10 PM

Open Source does not imply security. Closed source implies that security is unverifiable. So I’d say Open Source is better for secure systems, but it’s a “necessary but not sufficient” condition.

Kurt Seifried December 3, 2020 12:14 PM

We talk about this a LOT on the OpenSourceSecurity Podcast (http://opensourcesecuritypodcast.com/). The TL;DR is that at least with open source:

  1. people can audit it
  2. when problems are found they CAN be fixed
  3. those fixes can be rapidly incorporated in the upstream
  4. even if upstream doesn’t allow this you can at least publish the fix, attach it to the CVE entry, etc.

With Closed Source those above options generally don’t exist (ignoring exotic things like companies providing binary micro patches for Windows, etc.).

metaschima December 3, 2020 1:05 PM

I have been a Linux user since 2003. Certainly, open source software is not inherently secure. How secure it is depends on a lot of factors: the language, the number of developers, the coding policies, how active the community is, the bug tracking ability and how they respond to bugs, etc. Overall I would much rather use open source software than closed source, because at least I can review the code myself if there is something that concerns me, and even submit a patch. With closed source software it’s kind of a gamble. You don’t know what the source code looks like, what coding policies they have, or how good and active the developers are in continuously reviewing the code for bugs (which probably doesn’t even happen; they mainly respond to obvious bugs that people report). Certainly for cryptography I would never use closed source, and I always review the code myself to make sure it’s doing what it is supposed to.

Clive Robinson December 3, 2020 1:28 PM

@ ALL,

Whilst,

Open Source Does Not Equal Secure

This is equally true,

Closed Source Does Not Equal Secure

But the argument goes further, that is,

What is secure today, will become insecure fairly quickly

There are a couple of reasons for this,

1, Code no longer works in isolation

2, You can only test what you know to test for

Apart from specialised software, when you write code it is to run in a computing stack somewhere above the ISA and at or below the user interface. In order to function it generally has to use the system kernel to access input data and pass on output data. In most cases the kernel and IO mechanisms are larger than they need be. In part this is due to the mechanisms trying to be all things to all men.

Worse, most code calls libraries of unknown and changeable provenance. Programmers try to avoid using “static linking” for various reasons and go for “dynamic linking”, which means that the code you are writing can have code it uses swapped out from underneath it and replaced with something that makes your “tested secure” code now “untested and of questionable security”.
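As a toy analogy of that swap-out risk, in Python rather than real dynamic linking (the names below are purely illustrative, nothing here is a real attack):

```python
# The "tested" function below resolves hashlib.sha256 by name at call time.
# If anything else in the process rebinds that name, the reviewed code
# silently changes behaviour, much as a swapped shared library changes a
# dynamically linked binary without touching its source.
import hashlib

def fingerprint(data: bytes) -> str:
    """Code we reviewed and 'tested secure'."""
    return hashlib.sha256(data).hexdigest()

print(fingerprint(b"hello"))   # the SHA-256 digest we tested against

# Elsewhere in the process (a plugin, a compromised dependency, ...):
hashlib.sha256 = hashlib.md5   # the name is silently rebound to a weaker hash

print(fingerprint(b"hello"))   # same reviewed source, now an MD5 digest
```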

Similar applies to device drivers that lurk underneath the standard kernel interfaces your code uses.

Thus the lack of isolation leaves any code you write and test vulnerable beyond your control.

But even if you are putting real effort into security testing, “What do you test for?”

Well the answer spins on “What you know to test for”… That is you can only test for known instances and classes of vulnerability.

Thus you have the “unknown, knowns” and “unknown, unknowns” vulnerabilities to worry about. Hopefully an “unknown instance of a known class” of vulnerability will be stopped by the measures you put in place for the class of attack. However an “unknown instance in an unknown class” you can not test for; the best you can hope for is that the general coding rules and mechanisms you employ whilst writing the code will stop a new instance in a new class working against your code.

Essentially the best you can do is employ good design methods and practices and cross your fingers.

However there is a known class of attack you can do little or nothing about, nor can the OS or anything above the CPU ISA level in by far the majority of systems. And that is attacks exploiting vulnerabilities in the CPU microcode and, to a lesser extent, the Register Transfer Logic, and further down the stack at the MMU and DMA levels, or even the memory layer itself. All these attacks allow an attacker to change the contents of system memory, and the CPU can not detect them.

Thus “secure code loading” methods using “signed binaries” and the like in effect only check the code is valid whilst loading. After that the memory is open to attack that may not be detected even with “word and sector tagging”, because it’s not really the trusted CPU that is modifying the memory.

Thus protection mechanisms built into languages such as Rust to stop, say, buffer overflows can be negated fairly easily…
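A toy sketch of the “checked at load, not afterwards” point, just a Python analogy and nothing like a real secure loader:

```python
# Verify a blob's hash once, at "load" time, then mutate the in-memory copy.
# The load-time check says nothing about what the memory holds later; only an
# explicit re-check notices, and a sufficiently low-level attacker can change
# memory between any two such checks.
import hashlib

EXPECTED = hashlib.sha256(b"trusted program image").hexdigest()

def load(blob: bytes) -> bytearray:
    # the "signature" check happens exactly once, at load time
    assert hashlib.sha256(blob).hexdigest() == EXPECTED, "refusing to load"
    return bytearray(blob)                  # mutable in-memory copy

image = load(b"trusted program image")      # passes the load-time check
image[0:7] = b"hostile"                      # later in-memory tampering

# Only re-hashing the in-memory copy reveals the change:
print(hashlib.sha256(bytes(image)).hexdigest() == EXPECTED)   # False
```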

Security is not easy, and it goes way beyond just the design and coding methodology of the high level source code. Put simply, the further up the computing stack the code is, the less easy it is to make secure. Similarly, the more high level the language, the harder it is to make secure as well. But at the end of the day, in most cases, even if you put best effort into it, the code you test as secure will quickly rot to an insecure state…

Yes there are ways to add extra security to cover these issues as I’ve indicated in the past, but the price you pay is slower running code on any given piece of hardware.

MarkH December 3, 2020 1:48 PM

A couple of weeks ago, I saw a piece on Wired about the gap between the attention needed by open-source projects, and the available people power:

Studies suggest that about 9.5 percent of all open source code is abandoned, and a quarter is probably close to being so.

When the TrueCrypt shutdown announcement appeared a few years ago, commenters on this blog made their usual suggestions of sinister government interference, and darkly hinted that it was backdoored anyway.

The most thoughtful commentary I saw (I think it was on the blog of some notable techie) offered the explanation I thought by far the most likely: the anonymous toiler keeping TrueCrypt up-to-date had gotten worn out, and wanted to return to a more balanced life.

Abandoned open-source projects present a special type of vulnerability to malicious actors. Again from the Wired article:

Two years ago, the pseudonymous coder right9ctrl took over a piece of open source code that was used by bitcoin firms—and then rewrote it to try to steal cryptocurrency.

That the attack was caught perhaps owes something to the large sums of money involved, and the attention this entails.

How many abandoned (or nearly so) open-source tools have large numbers of users? What opportunities do these present to attackers?

John Jakes December 3, 2020 1:59 PM

Bruce is right on with this and it’s no laughing matter. Everyone thinks open source means secure; they couldn’t be more mistaken. Even completely open source tables have been found to contain Trojan horses:

https://bit.ly/37wBCih

Clive Robinson December 3, 2020 2:23 PM

@ MarkH,

… had gotten worn out, and wanted to return to a more balanced life.

Whilst “lifestyle issues” are a reason, there is another one: “employment”.

Some people write Open Source projects to “show case” their skills when their C.V. is a little on the empty side (ie recently graduated, or just started with a new programming language, etc). They get a job, and the need for the project fades. But also, some employers have “We own all your IP” clauses in employment contracts and similar, so there is no benefit to writing code outside employment, because they are only going to litigate you into bankruptcy or similar.

Whilst the latter issue is starting to change through FOSS there is still enough of it around to cause issues.

Some years ago, a now previous employer got taken over whilst I was still working there. They issued a new contract of employment that basically said that all inventions or designs or code that I had produced that was not owned by another entity now belonged to them… I told them it was not legal, their attitude was “tough”, and we parted ways acrimoniously. They tried it on, and it cost me money to hire a lawyer to send them a cease-and-desist or be-sued-into-hell letter. They decided to play hardball until the 14-day notice before litigation arrived. Not, as they expected, for a civil breach of contract case in a crown court, but for a Judicial Review in the high court over how they had forced the change of contract through.

I guess their lawyers pointed out that not only did they not have a hope in hell of winning, but any decision the court made would be made for all employees… So they bottled it, and then I started “constructive dismissal” proceedings against them, which they lost, and they had to pay the costs plus damages. Hopefully they learnt a lesson, but it taught me one as well: even though I had a letter, signed by a director of the company before it was taken over, saying they had no claim against anything I did outside of work, it would not stop people trying it on. Which gave rise to the second lesson: don’t put your IP anywhere visible where a disreputable employer might make a grab for it, because it can cost you big time one way or another, and for most people there is little you can do about it.

xcv December 3, 2020 3:59 PM

@O.P.

First, simply publishing the code does not automatically mean that people will examine it for security flaws.

People who depend on software code for something mission-critical often do find the time and money to audit the code and examine it for security flaws.

Security researchers are fickle and busy people. They do not have the time to examine every piece of source code that is published.

This is particularly true of “high-impact” security issues reported in the mass media — where researchers are subject to “gag orders” and “non-disclosure agreements” and so on and so forth.

So while opening up source code is a good thing, it is not a guarantee of security. I could name a dozen open source security libraries that no one has ever heard of, and no one has ever evaluated.

Quite true. Publishing source code and making it available for free access does not automatically make it secure. This is only a convenient way to allow or invite others to participate or cooperate — assuming that the main goals of any particular software project involve incorporating security into the design from the get-go, which is not always a safe assumption to make.

On the other hand, the security code in Linux has been looked at by a lot of very good security engineers.

Linus Torvalds curses and swears a lot — and professional security researchers are not always interested in contributing the best security principles and practices to the design of an open-source Linux kernel either.

I “like” the idea of NSA’s SELinux — but there is a lot of emphasis on administration and “management versus labor” and other issues that do not relate directly to the general security of a computer operating system.

vas pup December 3, 2020 5:14 PM

@Clive said:
“However an “unknown instance in an unknown class” you can not test for, the best you can hope for is that the general coding rules and mechanisms you employ whilst writing the code will stop a new instance in a new class working against your code.”
Clive, I just recall when AI made a move in Go and won against the best human player, a move nobody could have predicted at all.
Maybe AI could in the future at least generate the instances you are pointing to, or even test for them.

xcv December 3, 2020 8:39 PM

Open source means that the code is available for security evaluation, not that it necessarily has been evaluated by anyone. This is an important distinction.

A very important distinction indeed. If I publish code and make it available freely, it is with no guarantees — use at your own risk, and do your own security analysis on it for your own purposes.

I see a lot of tools out there that might be helpful to do some automated checking to catch common security pitfalls.
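As an example (a sketch only; I am assuming the Python linter bandit here, and describing its findings from memory rather than quoting its output), a file like this is the sort of thing such tools flag:

```python
# example.py: two classic pitfalls that static analysers such as bandit
# are designed to catch.
import subprocess

DB_PASSWORD = "hunter2"   # hardcoded credential

def list_dir(user_input: str) -> None:
    # building a shell command from user input invites command injection
    subprocess.call("ls " + user_input, shell=True)

# Running `bandit example.py` should report both issues; the fixes are to
# load secrets from the environment or a secret store, and to pass an
# argument list with shell=False.
```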

I am not really all that familiar with all these source code and runtime analysis “tools” available. There’s a strong “Six Sigma” (6σ) flavor to some of these tools, sometimes called “Five Nines” (99.999%), associated with the “Nine-to-Fivers” or “Top Hatters” motorcycle club, with many associates who work white-collar day jobs in the government, the U.S. Marshals kidnapping gangs, off-beat military police brotherhoods, etc.

Six Sigma above the mean is actually at a quantile of Nine Nines, because

∫[-∞,6] exp(-x^2/2)/√(2π) dx ≈ 0.9999999990134124

but there is a certain sense among these government worker brotherhoods in which the math and logic are irrelevant.
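You can check the quoted figure yourself with a few lines of Python:

```python
# The standard normal CDF at +6 sigma, Phi(6) = (1 + erf(6/sqrt(2))) / 2,
# computed with the standard library.
from math import erf, sqrt

phi_6 = (1 + erf(6 / sqrt(2))) / 2
print(phi_6)        # ~0.999999999013, i.e. nine nines
print(1 - phi_6)    # ~9.87e-10 in the upper tail
```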

Dave December 3, 2020 11:24 PM

Open Source does not imply security. Closed source implies that security is unverifiable.

Closed source typically means commercial software, which means that the owner is able to pay experts to review/pen-test it for security vulns. Some of the most secure code I know of is very much closed source, and quite a bit of money changed hands in the process of (trying to) ensure that it’s secure.

With OSS, in contrast, it only gets audited if (a) someone randomly decides to, or (b) it gets publicly and massively compromised, incentivising someone to have a look at it until they get bored and move on to something more interesting.

xcv December 3, 2020 11:42 PM

@Dave

Some of the most secure code I know of is very much closed source, and quite a bit of money changed hands in the process of (trying to) ensure that it’s secure.

The fine print and legal disclaimers do nothing to convince me of that.

With OSS, in contrast, it only gets audited if (a) someone randomly decides to, or (b) it gets publicly and massively compromised, incentivising someone to have a look at it until they get bored and move on to something more interesting.

Both cases can lead to good results. In case (a), auditing by outsiders is not even possible in the closed source world. In case (b), the availability of source code often enables the rapid development and deployment of fixes.

Moral and ethical qualms aside, a policy of “full disclosure” of vulnerabilities, with exploit code, without NDAs and gag orders, does yield the best long-term incentives for hardening a software system with respect to security.

Clive Robinson December 4, 2020 12:09 AM

@ vas pup,

Maybe AI could in a future at least generate those instance you are pointing to or even test them.

Whilst I’ve no doubt AI will be able to produce new instances of attack in known classes, I’m not so sure having it do so is of much use to a defender, other than to see if their current known-class protection rules are sufficient to catch it.

As for AI coming up with new classes of attack, of that I’m less certain. Whilst an AI system might be able to derive features from multiple known classes and come up with commonalities that indicate a deeper class, which can then be used to find “the missing gaps” between the known classes –essentially what it did with Go– I’m doubtful that it could find original, non-derived classes of attack, because that would require a different type of AI system than we currently have.

But let’s assume for the moment AI can come up with original, non-derived classes of attack: will it actually be of any use to defenders?

Firstly, AI is known for its inability to show how it got from point A to point B. This is important because, as has been found in the past, AI results can be dependent on idiosyncratic features that are effectively unique to a system under test, with the features not being in other similar systems[1]. That is, if you have two hand-made fences with chinks in them running parallel to each other, it will give you the coordinates of where to stand to see through, but as no pair of fences is the same it’s not particularly helpful[2].

Secondly there is a more metaphysical question of just how many classes of attack there are. If we assume, not unreasonably, that the number is quite large, the fact that another new class has been found may not be of much use. Nor might the way it was found. That is, an AI system could sit there and, say, find a thousand new classes of attack, but they might be just a drop in the ocean of new classes to be found.

What defenders need is not really new classes of attack to defend against, but new ways to combat large numbers of classes of attack by simple rules[3] that allow what might be risky activities to be carried out more safely.

Now it could be argued that you could have one type of AI system finding new classes of attack that then provide the data set for another type of AI system that analyses the attacks for new protection rules. But a little bit of thinking will tell you it’s not going to get you very far, as all the second system is doing is effectively inverting the first, imprecisely. To be useful you would need a large number of totally independent AI systems finding new classes of attack by entirely different “hidden rule sets” that the second system can analyse for deeper, more fundamental rule sets that might be of use. But you still end up against the old “Is it the cheese you put in the trap, the trap, or the place you put the trap, or some combination thereof, that attracts the mouse?” dilemma.

[1] One experiment was to find a high quality filter that used fewer gates, which could then be used in designs. The algorithm did indeed find a more efficient way to make a filter, but it was found to be based on features that varied from chip instance to chip instance, and thus of no real use.

[2] It’s not hard to come up with an algorithm to do this yourself. In essence you make a random guess, then test it, and keep looping round until you find a point; output the coordinates of the sight line and stop. It’s clear that it is not of real use. Thus a vulnerability an AI might find is overly specific to the computer system it is modeling, and thus the results are not useful on very similar systems.

[3] Let’s assume that an AI system finds a large number of new classes of attack with little or no commonality. All it really ends up telling you is that “all connected systems are vulnerable”. We already know how to mitigate this: “don’t connect your systems to communications”… But that’s not helpful if your business logic is dependent on communications. As we know, the solution might be to accept there is no static defence and go to some form of active defence that looks for anomalies or similar in the traffic or traffic patterns.

David December 4, 2020 1:55 AM

“Closed source typically means commercial software, which means that the owner is able to pay experts to review/pen-test it for security vulns. Some of the most secure code I know of is very much closed source, and quite a bit of money changed hands in the process of (trying to) ensure that it’s secure.”
Outside banking software this is very unusual.
Most commercial software is rushed out to meet marketing deadlines by inexperienced teams, with no budget or time to audit and test.
Then, if bugs are found, they might be fixed in the next paid-for release, and are rarely pushed to older versions.

Petre Peter December 4, 2020 6:44 AM

I was wondering if you’d trust your bank if you knew it was using open source software.

Goat December 4, 2020 8:08 AM

@Dave Free software has been working out great business models. Besides, most proprietary software contains at least some malfeatures, and these often make the software insecure; it is also mostly not in their best interest (profit-wise) to invest in security.

@Kurt what a pleasant surprise; I have heard your episodes, and the heading reminded me of your insights on open source security. One thing to complain about: you never mention how proprietary anti-features may jeopardise security, or how free/libre (not just open source) may affect such things.

Miksa December 4, 2020 8:39 AM

I believe the open source world needs a service for bookkeeping audits. I might take a look at a piece of code, but is it worth it if I assume that someone more qualified has already done it? There’s usually no way to know whether that has happened. Then, if I do check the code and consider it good, how do I tell the rest of the world? And how do I specify that my audit only concerns this specific version of the code? Usually we know of these audits only if something bad is found.

Bookkeeping should probably be part of GitHub and other code repositories. When someone reads through a function or some other piece of code and considers it good, they should be able to brand it with their opinion. When enough people with enough credentials have done the same, the repository can mark it good, or green. If the code is modified, it’s marked as needing re-auditing.

If you audit code, find a vulnerability, and submit a pull request for a fix, and the upstream then accepts the fix and publishes it, possibly even with a CVE, this should automatically raise your auditing credentials.
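A rough sketch of what one such bookkeeping record might look like (purely hypothetical; this is not an existing GitHub feature, and all names are made up):

```python
# A hypothetical audit-attestation record: an auditor vouches for one path at
# one exact commit, so any later change automatically invalidates the record.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    repo: str                    # e.g. "example/libfoo" (illustrative)
    path: str                    # file or function that was read
    commit: str                  # exact revision the opinion applies to
    auditor: str                 # identity of the reviewer, ideally key-backed
    verdict: str                 # "looks-good" or "issues-found"
    notes: str = ""
    auditor_credential: int = 0  # raised by accepted fixes, credited CVEs, etc.
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def still_valid(record: AuditRecord, current_commit: str) -> bool:
    """An attestation only covers the exact revision it was made against."""
    return record.commit == current_commit
```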

Clive Robinson December 4, 2020 9:26 AM

@ Petre Peter,

I was wondering if you’d trust your bank if you knew it was using open source software.

Well, we know that their hardware is mostly based on CPUs that have exploitable faults.

So no matter how good or bad the software security is, it rests on shaky foundations…

So we already have to “trust” that which we know is at best less secure than is desired.

MikeA December 4, 2020 11:21 AM

@Clive on AI software testing and “new classes” of errors.

I don’t doubt that AI would find a correlation between “committed after 4:30 PM on a Friday” and “exhibits dubious behavior”.

Also, too many times (OK, maybe 3 or 4, but that was too many), when I found and reported a bug associated with a particular coding idiom, the maintainer would fix the exact instance I reported but had no interest in fixing the other instances of the same general bug. I recall (over a decade ago) finding a semantic variant of “patch” (it would pattern-match on the parse tree, IIRC) that would have made finding and fixing such things easier and more reliable, but I was on my way out the door and doubt my organization followed up.

Clive Robinson December 4, 2020 1:58 PM

@ MikeA,

when I found and reported a bug associated with a particular coding idiom, the maintainer would fix the exact instance I reported, but had no interest in fixing the other instances of the same general bug.

It’s not uncommon. I used to find “login buffer overflows” and similar on *nix boxes. One particular version for the ICL Perq, called PNX (pronounced Pee-nix, and yes there were jokes about how a “unix only had a pee-nix” etc), had a buffer overflow issue on login that dropped you right into a root shell. When that had been fixed… I then found another way in to a root shell. The system also had a graphics tablet, and I found you could likewise, by moving things around a certain way, cause a buffer overflow that dropped you into a root shell. I did not bother reporting the second fault, because there was a nice game of Pac-Man on the system and, the SysAdmin being mean, you could only play it as root… The SysAdmin caught me playing it one lunch time, and because I would not tell him how I got root, he locked my account on the machines… Imagine his surprise when he found me logged into my account, busy finishing off a program to talk to the IEEE bus, when, as he quickly found from the password file, I should still have been locked out… He again asked how I’d got in and I just smiled and said “It helps to have the chops to know good IO hacks”… Not sure if he ever found out how; I moved on to a high end hardware development company a couple of weeks later.

vas pup December 4, 2020 3:11 PM

@Clive Robinson • December 4, 2020 12:09 AM
Thank you very much!
Many interesting points.
In particular: “Firstly, AI is known for its inability to show how it got from point A to point B.” That should be resolved at least at the testing stage of AI itself.

xcv December 4, 2020 11:18 PM

@ Kurt Seifried

We talk about this a LOT on the OpenSourceSecurity Podcast (http://opensourcesecuritypodcast.com/). The TL;DR is that at least with open source:

  1. people can audit it
  2. when problems are found they CAN be fixed
  3. those fixes can be rapidly incorporated in the upstream
  4. even if upstream doesn’t allow this you can at least publish the fix, attach it to the CVE entry, etc.

With all due respect, security is a lot of work, and simply making something open source does not make it secure.

JonKnowsNothing December 5, 2020 1:23 AM

@All

re:
a) people can audit it
b) when problems are found they CAN be fixed

As several folks have pointed out there is a lot between a) and b) and a load of assumptions in between.

1, Just because someone checks it out, does not mean there are no flaws.

2, Just because they check it out and find a flaw, does not mean they found all flaws.

3, The presumption that the person doing the checking knows a flaw when s/he/AI sees one is a big presumption. Open source is open and, depending on the project, pretty much anyone can change, insert and update stuff. It doesn’t mean it’s good stuff, or without more flaws, or that the person is even knowledgeable enough on the topic to know where something went pear shaped, as long as it got the required approvals.

4, Just because it was verified OK today doesn’t mean it didn’t get trashed later. (regression test)

5, If it’s not your life’s work, there’s no expectation of continuing vigilance.

6, Even hot shot programmers get it wrong: Heart is still Bleeding..

ht tps://en.wikipedia.org/wiki/Heartbleed
(url fractured to prevent autorun)

Clive Robinson December 5, 2020 6:42 AM

@ Denton Scratch,

That is not what it [AI] always meant.

I remember a time when there was the holy grail search of “Hard AI” and the “Soft AI” of expert systems and fuzzy logic.

I suspect that this,

Modern AI research has apparently abandoned the “intelligence” bit, and succumbed to the pressure to produce decision systems that are commercially deployable.

has an underlying cause, and it’s not a good one but a bad one.

As we know “outsourcing” of Government functions has three purposes,

1, It puts in arms length deniability.
2, It avoids oversight due to “commercial confidentiality”.
3, It puts large sums of money in certain people’s back pockets, some of which gets back into decision-makers’ pockets (nest feathering and the like).

None of which are desirable to the ordinary voter and taxpayer.

The same can be said of “Commercially Deployable AI”: all it really does is put an “unexplainable computer program” in as a “firebreak” to “blame and responsibility”, with the old “the computer says” combined with the “only following orders” argument. Likewise it puts lots of cash in certain pockets.

Thus the real purpose is to “carry on as before”, but this time with a “scapegoat” that can be easily sacrificed at no pain, just further profit, as it is just a box of electronic junk and algorithmic junk…

P.S. The fact that I’m getting more cynical to a positive power law of time, does not mean it is any less the truth of the considered evidence. Possibly just hard earned wisdom 😉

Clive Robinson December 5, 2020 6:59 AM

@ Michael Salmon,

… reminded me of Ken Thompson’s speech on trusting trust: if you can’t trust your tools then you can’t trust the result.

You are not going to like this…

But ultimately you can not trust your tools if someone gets underneath them at a lower layer in the computing stack.

That is, it matters not a jot what your software does if I can change the contents of locations in the memory it uses…

Yes, there are ways you can limit it, like individual memory page encryption, but that just makes an attacker’s job harder, not impossible, as the page data has to be stored somewhere, and that includes its symmetric encryption key.

Likewise you can use multiple systems and use a “voting circuit” on the output.
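A toy software version of such a voting circuit might look like this (a sketch only; a real deployment needs genuinely independent systems, not three copies of the same code):

```python
# Run the same computation on several (ideally independent) systems and
# accept only an answer that a strict majority of them agree on.
from collections import Counter
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def majority_vote(replicas: Sequence[Callable[[bytes], T]], data: bytes) -> T:
    tally = Counter(replica(data) for replica in replicas)
    answer, votes = tally.most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise RuntimeError("no majority: treat the output as untrusted")
    return answer

# Example with three (here identical, in reality independent) implementations:
import zlib
replicas = [lambda d: zlib.crc32(d)] * 3
print(majority_vote(replicas, b"payload"))
```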

At the end of the day, the person who can attack from the lowest layer “atomically” is going to win. Thus you have to stop any attacks being atomic, so you can see the changes of memory etc in progress and catch them.

JonKnowsNothing December 5, 2020 8:25 AM

@Clive @Michael Salmon @All

re: Trust the tools… nooo….

Eons back, in the pre-COVID-19 mists of time, I had similar conversations with superiors about the “tools” in use. There was much open source stuff, because companies and startups do not like spending $$$ on per-seat licenses or $$$$ for annual site licenses; it rather dents the budget quickly.

They explained the whys and wherefores of things. I remember asking about the compiler: if they were concerned about X, Y, and Z, what about the compilers in use?

It turned out that these were no better than any other tools in use, just less visible. Every time I ran a compile I thought about all the things I didn’t know were happening. When they ran the optimizer, the same. Rarely did any of my colleagues bother with compiler-thrown non-critical errors.

Better role-models did. I followed the example of better role-models. Did it make the code any safer? I dunno, I would like to think so but I don’t know so.

Pud December 5, 2020 8:41 AM

A little disclosure is in order when it comes to GitHub’s research into the security of open source projects. Since 2018, GitHub has been owned by Microsoft, whose entire business model revolves around 1. cornering the market, 2. forcing proprietary code upon users, and 3. threatening users with legal action for reverse engineering their code.

Clive Robinson December 5, 2020 9:08 AM

@ JonKnowsNothing, ALL,

Did it make the code any safer? I dunno, I would like to think so but I don’t know so.

That is the same dilemma as “Defence Spending”.

You only get to know when you’ve spent too little because you don’t get attacked. You don’t ever get to find out what the “minimum defence spend” is to deter attackers (assuming they are rational actors).

But Defence Spending is “ongoing”; that is, “you can change the now” if you think it’s required. You can not really do that with software or hardware once it is shipped.

So the temptation of a conscientious employee is to “over engineer”, on the principle that “attacks improve with time”. However this is seen by many as in conflict with the duty of management to shareholders…

That is, there is a direct conflict between the long term thinking of the conscientious worker and the very short term thinking of management.

Which viewpoint is correct? Arguably short term thinking always causes “race for the bottom” behaviours and much worse, including instability (which is strongly desired by the finance industry to exploit, and which gave us BC1 and BC2 etc). I am not sure of any real disadvantages of the long term viewpoint other than an increase in price for better quality goods, as all the other points people usually give have detrimental flip sides that are in the longer term more harmful.

Clive Robinson December 5, 2020 9:19 AM

Oops,

“think ahead error” in my above,

“You only get to know when you’ve spent too little because you don’t get attacked.”

The “don’t” should be “do”, and I deleted the following “when you don’t get attacked you don’t know if you’ve spent too much” and replaced it with the less trite minimum spend.

Which also allows for “terrorists” who care not how much you spend and, as the current crop demonstrates, generally do not behave as rational actors.

Erdem Memisyazici December 5, 2020 10:57 AM

In fact a lot of companies encourage employees to contribute to open source to do just that.

Anders December 6, 2020 4:29 PM

@ALL

This fits in here nicely: a dynamic analysis of Docker containers, and what it found.

hxxps://prevasio.com/static/Red_Kangaroo.pdf

Andreas December 7, 2020 10:58 AM

This is something I have tried to make people understand for a long time.

Open source does not automatically make something more secure.
But it gives it the potential to be more secure.

Who? December 8, 2020 1:43 PM

The worst offenders, but not the only ones, are web browsers. In most cases, huge pieces of open source code with hundreds of dependencies on third party projects. Unauditable in any meaningful way. Full backwards compatibility for both HTML and JavaScript (ECMAScript) is a nightmare; in my humble opinion Tim Berners-Lee was wrong when he suggested backwards compatibility as a requirement for a web browser. Modern browsers should focus on HTML5 and the most recent technologies instead of emulating, bug-by-bug, historic releases of these languages.

If we want secure software we need to educate developers on:

  1. Writing secure code, they should have the resources available to verify themselves the software they write, instead of depending on feedback coming from external auditors; there must be a security culture in the core of the open source community.
  2. Making software simple and functional instead of feature-rich, to improve security, auditability and stability. A web browser that talks the most recent HTML5, CSS and ECMAScript should require no more than ten or twenty megabytes. Remember the old Mosaic and Navigator browsers? They were the way to go.
  3. Simplify standards.

The World Wide Web Consortium did the right thing with HTML5; sadly it means nothing if web browsers are required to support all previous releases of the markup language and other components.

Of course new technologies (e.g. video transmission over the Internet) must be implemented from time to time. Simplicity does not mean dropping evolution and improvements, but I miss the way the first browsers worked.

An upgraded Mosaic or Netscape Navigator browser (with audited code, and supporting HTML5, CSS 2.1 and ECMAScript 11th edition) would be a great platform for the development of secure web browsers.

The same can be said about mostly any open source (and closed source too) component we can think of. Right now software is again becoming unmanageable, just as it was in the nineties. We are making the same mistakes, but this time there are huge adversaries around, from criminal organisations to governments, ready to exploit any mistake coded in the software we use on a daily basis.

Who? December 8, 2020 2:00 PM

As an open source developer, I think full availability of source code is a requirement for a secure software component. As Carl Mitchell said in a previous post, open source is a necessary, even if not sufficient, condition to consider a software component secure. But relying only on the community to check our software is a mistake, even if we get great feedback from them.

On the other hand, do not believe a binary necessarily matches its source code. A binary distribution of an open source package may include code that has not been published in the public source code repository. We do it every day when testing experimental features in our snapshots: code that has not [yet] been added to our public repositories.
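A sketch of the check that reproducible builds make possible (the file names are hypothetical, and the comparison only works if the build is deterministic): build the package yourself from the published source and compare the artifact, byte for byte, with the binary being distributed.

```python
import hashlib
from pathlib import Path

def sha256(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical file names: one artifact built locally from the public
# source, one downloaded from the vendor's binary distribution.
local = sha256("build/libexample.so")
shipped = sha256("download/libexample.so")

if local != shipped:
    print("binary does not match the published source, "
          "or the build is not reproducible")
```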

Clive Robinson December 8, 2020 4:27 PM

@ Who?,

Modern browsers should focus on HTML5 and the most recent technologies instead of emulating bug-by-bug historic releases of these languages.

Err, no, your assumptions are showing, amongst other things.

There is nothing wrong in supporting historic standards; in fact it’s a requirement for “web continuity” of historic sites/pages (and also a legal one in some countries).

Whilst I agree “emulating bug-by-bug” is a bad idea, this should not be conflated with dealing with “historic releases”.

Further, many of those “bugs” were deliberate policy by large software companies trying to kill off competition via “embrace and extend”: they put in deliberate incompatibilities that favoured their browser when working with their server products, or worse, deliberately exploited aspects of the competition’s products that were standards compliant.

But there is a greater security danger in “should focus on HTML5 and the most recent technologies”. The HTML5 standard is, in the main, a very bad idea: various “funding Corps” have encouraged the inclusion of all sorts of potential nasties that most definitely should not be in there, but are. The W3C should have known better, but either they were too incompetent to be issuing standards, or they thought acquiescing for paltry fiscal reasons more important. Which indicated to many that the W3C had become “captured and corrupted” by the major browser vendors Apple, Google, Mozilla, and Microsoft. A point confirmed back at the end of May last year, when the W3C ceded authority over the HTML and DOM standards to WHATWG and its “Living Standard”…

So, in effect, what was the “HTML5 Standard” does not really exist any more, just the “moving target” of the Living Standard. That, by the way, strongly pushes “backwards compatibility in browsers”, which means that “emulating bug-by-bug historic releases” is one of the major aims…

Who? December 9, 2020 10:50 AM

@ Clive Robinson

And your assumptions are showing too: that an incredibly convoluted technology, like current browsers, is better than a simple one.

I agree, HTML5 is now a moving target. That is bad, in the same way that OS X and (especially) Windows 10 being moving targets these days is bad. But it is also a subset of earlier HTML specifications, a subset that removes a lot of useless features while preserving the important ones and (I hope) will tend to stabilise over time. It is a clean subset, and a stricter one, as you can see by parsing any HTML5 code.

Most of these old web pages either do not exist anymore or do not work as expected in current browsers (think of the ones that heavily depend on Java applets or Flash intros). Meanwhile, archive.org is doing a poor job of preserving that knowledge; sometimes archive.org is useful when downloading an old firmware update or manual for our devices, but in most cases it just does not work as intended.

I am all for removing those inherited pop-up windows, frames, and non-standard extensions like Java applets and Flash, and making things as simple as possible.

I agree with you, the W3C is in the wrong hands and moving standards to the wrong working groups. But in my humble opinion that is a different matter: a political one, not a technical one.

c1ue December 16, 2020 4:31 PM

This isn’t rocket science.
Paul Vixie – one of the early advocates of open source as being secure – publicly changed his mind at least 2 or 3 years ago.
His reason?
When he first advocated open source as secure – there were 10 million lines of code and there was a reasonable likelihood that any line would be looked at.
Today, there are 10 billion lines of code and the vast majority will never even be read once by a security reviewer.

P.J. V December 24, 2020 2:00 PM

Reminds me of an economics paper I read about the “efficient markets hypothesis”. There is an old joke about the assistant professor who, when walking with a full professor, reaches down for the $100 bill he sees on the sidewalk. But he is held back by his senior colleague, who points out that if the $100 bill were real, it would have been picked up already.
