this post was submitted on 29 Mar 2024
671 points (99.0% liked)


The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.

“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor Openwall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates. Those updates and fixes can be found here, here, here, and here.

On Thursday, someone using the developer's name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.

“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.

One of the maintainers for Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.

“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.

He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise.

[–] [email protected] 61 points 7 months ago (5 children)

You're making a logical fallacy called affirming the consequent where you're assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would've been caught.

Suppose the bad actor had not been sloppy; it would still be entirely possible that the backdoor gets identified and fixed during a security audit performed by an enterprise-grade Linux distribution.

In this case it was caught especially early because the bad actor did not cover their tracks very well, but now that that has occurred, it cannot necessarily be proven one way or the other whether the backdoor would have been caught by other means.

[–] [email protected] 22 points 7 months ago (1 children)

Also, they are counting the hits and ignoring the misses. They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people review the code, and such a thing will be recognized. So people don't typically try to sneak backdoors in. Meanwhile, backdoors have been discovered in an astonishing number of closed-source projects where no one was even able to review the code.

[–] [email protected] 8 points 7 months ago* (last edited 7 months ago) (2 children)

They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people are reviewing the code and such a thing will be recognized.

Everyone assumes what you have stated, but how often does it actually happen?

How many people actually perform code reviews, how often, and how rigorously? Especially on high-volume projects?

[–] [email protected] 13 points 7 months ago (1 children)

Depends on the project, but for a lot of projects code review is mandatory before merging. For XZ the sole maintainer can do whatever they want.

[–] [email protected] 10 points 7 months ago* (last edited 7 months ago)

Depends on the project, but for a lot of projects code review is mandatory before merging. For XZ the sole maintainer can do whatever they want.

I've done plenty of code reviews in my time, and I know one thing: the busier you are, the faster you go through code reviews, and the greater the chance that something gets missed.

I would hope that for the real serious shit (like security) the code reviews are always thorough and complete, but I know my fellow coding brethren, and we all know that's not always the case. Time is a precious resource, and managers don't always give you the time you need to do the job right.

Personally I use a distro backed indirectly by a corporation and hope that each release gets the thorough review that it needs, but human nature is always a factor in these things as well, and honestly, there are times when everyone thinks everyone else is doing a certain task, and the task falls between the cracks.

[–] [email protected] 1 points 7 months ago

Reviewing the code was irrelevant in this case because the back door only existed in the binaries.

[–] [email protected] 11 points 7 months ago* (last edited 7 months ago) (1 children)

It's possible, maybe, but perhaps still unlikely.

Overwhelmingly thorough security review is time consuming and expensive. It's also not perfect, as evidenced by just how many security issues accidentally live long enough to land even in enterprise releases. That's even without a bad actor trying to obfuscate the changes. I think this general approach had several aspects that would have made it likely to pass scrutiny:

  • It was in XZ, which was likely not perceived as a security critical library. A security person would recognize anything as potentially security critical, but they don't always have the resources and so are directed to focus on the obviously security-related components and the historical security-incident magnets.
  • It was carried out by someone who spent years building up an innocuous reputation. Investigation may even show previous "test samples" to be malicious but not caught, or else they were a red herring to get people used to random test samples getting placed in the project.
  • The only "source code" he touched was "just build scripts". Even during a security audit, build shell scripts are likely going to be ignored; they're just build scripts, and maybe you run some checks on all scripts, but those checks aren't going to catch this sort of misbehavior.
  • The actual runtime malicious code was delivered as portions of ostensibly throwaway test sample xz files. The malicious code is applied by a binary patch of the build output. A security audit won't be thinking too hard about a sea of binary files that are just throwaway samples as fodder for tests.
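To make those last two bullets concrete, here's a deliberately schematic Python sketch of the technique. The marker, file contents, and offsets are all made up; this is not the actual xz mechanism, just the shape of "payload hidden in an opaque test file, spliced in at build time":

```python
# Purely schematic illustration (hypothetical marker, payload, and offsets;
# NOT the actual xz-utils mechanism): a build step extracts bytes hidden
# past a delimiter in an opaque "test sample" and patches them into an
# ostensibly innocent build artifact.

MARKER = b"\x00TESTDATA\x00"  # hypothetical delimiter inside the carrier file

def extract_payload(carrier: bytes) -> bytes:
    """Return the bytes hidden after the marker, or b"" if there are none."""
    idx = carrier.find(MARKER)
    return carrier[idx + len(MARKER):] if idx != -1 else b""

def patch_artifact(artifact: bytes, payload: bytes, offset: int) -> bytes:
    """Overwrite part of the build output with the payload."""
    return artifact[:offset] + payload + artifact[offset + len(payload):]

# A reviewer auditing "just build scripts" sees only copies and offsets; the
# malicious bytes themselves live inside a binary test file nobody reads.
carrier = b"ordinary-looking test data" + MARKER + b"\xde\xad\xbe\xef"
patched = patch_artifact(bytes(16), extract_payload(carrier), offset=4)
assert patched[4:8] == b"\xde\xad\xbe\xef"
```

The point of the sketch: every individual step looks like boring file plumbing, which is exactly why it slides past review.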

So while I see the point about logical fallacy about it accidentally not getting far enough to see if the enterprise release process would have caught it, I think we know track records well enough to deem this approach likely to get through. Now that it has been caught, I could see some changes that may mitigate this in the future. Like package build scripts deleting all test samples and skipping tests when building for release, as well as more broad scrutiny.
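One mitigation along those lines can be sketched as a toy model: verify that a release tarball contains exactly the files (by content hash) of the VCS tag it claims to be built from, since the xz backdoor's modified build script shipped only in the tarball and not in git. File names and contents below are invented for illustration; this is not a real packaging tool:

```python
# Toy model of a post-xz mitigation: diff a release tarball against the
# tagged source tree by content hash. The xz backdoor's modified build
# script existed only in the distributed tarball, not in the repository.
import hashlib

def digest_tree(files: dict[str, bytes]) -> dict[str, str]:
    """Map path -> sha256 hex digest for a set of files."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def tarball_diff(tarball: dict[str, bytes], tag: dict[str, bytes]) -> list[str]:
    """Paths whose content differs, or that exist on only one side."""
    a, b = digest_tree(tarball), digest_tree(tag)
    return sorted(path for path in a.keys() | b.keys() if a.get(path) != b.get(path))

# Hypothetical contents: the release ships a doctored macro the tag lacks.
git_tag = {"configure.ac": b"AC_INIT(...)", "m4/build-to-host.m4": b"clean macro"}
release = {"configure.ac": b"AC_INIT(...)", "m4/build-to-host.m4": b"clean macro + injected stage0"}
assert tarball_diff(release, git_tag) == ["m4/build-to-host.m4"]
```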

There's also the reality that a lot of critical applications deem themselves too cool to settle for "old crusty enterprise distributions". They think that approach is antiquated and living on the edge is better. Admittedly I doubt they'd go as far as Arch, Tumbleweed, or Rawhide, but this one could have easily made it into Debian testing, a Fedora release, or an Ubuntu release.

[–] [email protected] 2 points 7 months ago (1 children)

I think we know track records well enough to deem this approach likely to get through.

That was my concern, and why I brought up my point.

Human nature, especially when volunteer work versus paid work is being done, as well as someone who purposely over the long-term is trying to be devious, could be a potent combination for disaster.

I still wonder if there should be an actual open source project that does nothing but security audits of all other open source projects, hence my original question as an opener to a conversation that I never got to elaborate on because I was getting attacked almost immediately by people who are very sensitive about bringing any criticisms/concerns about open source out in the open.

[–] [email protected] 4 points 7 months ago (1 children)

The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

You can look at a closed source vendor like Cisco and see backdoors, generally left over from developer access, yet open for abuse. The nature of those is so blatantly obvious that any open review would have spotted it instantly, yet there it was.

With this, you had a much more deviously obfuscated attack that probably would have passed through even serious security audits unnoticed, yet it was caught because someone was curious about a slight performance degradation and investigated. Having been in the closed-source world, I can tell you that they never would have caught something like this. Anyone even vaguely saying they wanted to spend time investigating a session-startup delay of half a second would be chastised for wasting time.
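That half-second delay is exactly the kind of signal a routine benchmark check could flag. A minimal sketch, with invented numbers and threshold (a real check would compare like-for-like runs on identical hardware):

```python
# Sketch of flagging a latency regression like the one that exposed the xz
# backdoor (sshd logins suddenly ~500 ms slower). Samples and threshold are
# made up for illustration.
import statistics

def regressed(baseline_ms: list[float], current_ms: list[float],
              threshold_ms: float = 100.0) -> bool:
    """True if the median latency grew by more than threshold_ms."""
    return statistics.median(current_ms) - statistics.median(baseline_ms) > threshold_ms

before = [12.0, 11.5, 12.3, 11.9]     # hypothetical login times before the upgrade
after = [510.0, 498.0, 505.2, 512.4]  # and after: the half-second smell
assert regressed(before, after)
assert not regressed(before, [13.0, 12.5, 12.8, 13.1])
```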

Further, open source projects are also the fodder for security researchers to build their resumes. Hard to prove your mettle without works, and catching vulnerabilities in OSS code is a popular feather in their cap.

It also implies that open source is strictly a volunteer affair. Most commercial applications of a Linux platform involve paid employees doing some enablement, and that differs place to place. There's of course Red Hat paying for security research, and Google and Microsoft as well. I know at least one company that distrusts everything and repeats a whole bunch of security audits, including paying external companies to audit open source code. I would wager that folks downstream of, say, CentOS Stream or certain embedded platforms can feel pretty good about audits. Of course all bets are off when you go grab tarballs, npm, pip, etc.

[–] [email protected] 1 points 7 months ago (1 children)

The issue is that it implies that open source has a problem due to volunteers that is not found in closed source, which is not really the reality.

I (partially) disagree. Fundamentally, my belief is that someone who gets paid to do the work is more rigorous doing the work than someone who does it on a volunteer basis, a human nature thing. Granted, I'm speaking very generally, and what I stated is not always true, but still.

Also, corporations that write closed-source programs are much more averse to being sued if their product fails (there's a reason we're seeing so many corporations slap arbitration clauses into their agreements these days; they're risk-averse).

Open source projects tend to just be more careful about their code base not being tainted, and write in disclaimers ("As-is") to protect themselves legally for the failure of the product scenario, and call it a day (again, very generally speaking (I use Fedora specifically for a reason)).

And speaking of Fedora, I do agree with your point that some open source projects are actually done by paid coders. I just believe that's more of the outlier, than the norm, though. Some of that work is done by corporate employees, but still on a volunteer basis.

Not dismissing at all, I am thankful for corporations that actually spend time letting their employees do open source work, even if it's just for their own direct benefit, as it also benefits everyone else.

[–] [email protected] 6 points 7 months ago (1 children)

Having worked with closed source, whatever they project externally, internally they are generally lazy and do the bare minimum. If there is a security review, it might just be throwing the code at something like BDBA, which just checks dependencies against CVEs, or maybe a static-analysis tool like Coverity. That's about as far as a moderately careful closed-source shop goes. It is exceedingly rare for them to fund folks to endlessly fiddle with the code looking for vulnerabilities, and in my experience they actively work to rationalize away bugs if possible, rather than allocating time to chasing root cause and fix.

There may be paragons of good closed source development, but there are certainly bad ones. Same with open source.

I also think most open source broadly is explicitly employee work nowadays. Not just hobbyist, except for certain niches.

[–] [email protected] 1 points 7 months ago (1 children)

internally they are generally lazy and do the bare minimum.

Day to day, and with a lazy manager who is not technically knowledgeable, I would agree, and they do exist in corporations.

But if you work for one who knows what they're doing and gets a mandate from their boss to make sure the code doesn't leave the corporation legally exposed, then not so much.

Special events like Y2K also get extra scrutiny for legal reasons, way up and above the normal level of scrutiny that production code gets.

I've worked at both types throughout my career.

[–] [email protected] 2 points 7 months ago

The same argument can be made about open source: some projects are very carefully and fastidiously managed, and others not so much.

Main difference is that with closed source, it's hard to know which sort of situation you are dealing with, and there's no option for an interested third party to come along and fix a problematic project.

[–] [email protected] 9 points 7 months ago (1 children)

Have those audits you allude to ever caught anything before it went live? Cuz this backdoor has been around for a month and Red Hat is affected, too. Plus this was the single owner of a package who is implicitly trusted; it's not like it was a random contributor whose PRs would get reviewed.

The code being open source helps people track it down once they try to debug an issue (performance issues and crashes, because in their setup the memory layout was not what the backdoor was expecting), that's true. But what actually triggered the investigation was the bug. After that it's just a matter of time to trace it back to the backdoor. You underestimate reverse engineers. Or maybe I'm just spoiled.

How long until US bans code from developers with ties to CN/RU?

[–] [email protected] 5 points 7 months ago (1 children)

How long until US bans code from developers with ties to CN/RU?

That won't happen because it would effectively mean banning all FOSS, which isn't remotely practical.

[–] [email protected] 1 points 7 months ago (2 children)

How do you propose we meaningfully fix this issue? Hoping random people catch stuff doesn't count.

[–] [email protected] 2 points 7 months ago (1 children)

An open source project that does nothing but security audits on other open source projects?

[–] [email protected] 2 points 7 months ago (1 children)
[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

How do you interpret the reactions to that comment that you linked?

I ask in trying to understand how to interpret the comment accuracy/validity.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

That's a great question. No way to tell. It's freaking emoji.

A thumbs down could be displeasure of the product not being able to catch it, or it could be them not liking the comment because they think it's untrue.

A fuzzer might catch the crashes related to the memory layout? But its purpose is to look for vulnerabilities, not malice.
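For illustration, here's a toy version of the idea behind fuzzing. Real fuzzers, like the coverage-guided engines behind OSS-Fuzz, are far more sophisticated; the "parser" and its bug here are invented purely to show the mechanism:

```python
# Toy illustration of fuzzing: feed systematically mutated inputs to a parser
# and record which ones crash it. The bug and parser are hypothetical.

def fragile_parser(data: bytes) -> int:
    if len(data) > 3 and data[0] == 0xFF:  # invented header-handling bug
        raise ValueError("crash: bad header handling")
    return len(data)

def fuzz(parser, seed: bytes) -> list[bytes]:
    """Flip each byte of the seed to classic boundary values; collect crashers."""
    interesting = [0x00, 0x7F, 0x80, 0xFF]
    crashes = []
    for pos in range(len(seed)):
        for value in interesting:
            data = bytearray(seed)
            data[pos] = value
            try:
                parser(bytes(data))
            except Exception:
                crashes.append(bytes(data))
    return crashes

assert fuzz(fragile_parser, b"\x00safe-input") == [b"\xffsafe-input"]
```

Which also shows the limit being discussed: a fuzzer finds inputs that crash honest code, but deliberately planted malice that behaves correctly on every input won't surface this way.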

The dude himself is legit tho, he probably owns OSS Fuzz

https://www.linkedin.com/in/jonathan-metzman-b8892688

https://security.googleblog.com/2021/03/fuzzing-java-in-oss-fuzz.html

[–] [email protected] 1 points 7 months ago

That’s a great question. No way to tell. It’s freaking emoji.

So many different ones too, not just up or down thumb emojis.

[–] [email protected] 1 points 7 months ago

In time it may become a trade-off between new (with associated features and speed) vs tried-and-tested/secure.
To us now this sounds perverse, but remember that NASA generally uses very old hardware because they can be more certain the various bugs & features have been found and documented. In NASA's case this is for reliability. I'll concede 'brute force' does add another dimension when applying this logic to security.

This may also become an AI arms race. Finding exploits is likely something AI could become very good at - but a better AI seeking to obfuscate?

[–] [email protected] 1 points 7 months ago (2 children)

You’re making a logical fallacy called affirming the consequent where you’re assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would’ve been caught.

No, I'm actually making that comment based on a career as a software developer, who has actually worked on a few open source projects before.

[–] [email protected] 5 points 7 months ago (1 children)

Your credentials don't fix the logical fallacy.

[–] [email protected] 1 points 7 months ago

Experience matters.

[–] [email protected] 4 points 7 months ago (1 children)
[–] [email protected] 0 points 7 months ago* (last edited 7 months ago) (1 children)

What, experience doesn't matter?

As Groucho Marx would say, "I can believe you, or my lying eyes".

[–] [email protected] 0 points 7 months ago* (last edited 7 months ago) (1 children)

Experience doesn't matter if you don't read Wikipedia links given to you by random people :)

Edit:

I'm actually making that comment based on

has another tone to "in my experience as"

Didn't actually want to educate you, but I feel this edit won't hurt. Literally.

[–] [email protected] 0 points 7 months ago* (last edited 7 months ago) (1 children)

Experience doesn’t matter if you don’t read Wikipedia links given to you by random people :)

You're assuming I don't already know what's being discussed in the link (or have read the link), but disagree with how it's being applied to me.

Also, experience doesn't evaporate into the ether just because someone does not read a link. That's a fallacy for sure.

[–] [email protected] -2 points 7 months ago (1 children)

You're assuming I'm assuming.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

And you're assuming that I'm assuming that you're assuming. /s

Any particular reason why you're getting on my case?

[–] [email protected] 0 points 7 months ago (1 children)

Because the way this conversation started was a logical fallacy you weren't aware of. I like to teach.

You're dragging it too. I know now, you are not one to learn. But can you at least learn from this and move on?

[–] [email protected] 3 points 7 months ago (1 children)

Because the way this conversation started was a logical fallacy you weren’t aware of.

You're assuming I'm not aware of the point you're bringing up, again. I am, I'm disagreeing with you and how you're trying to apply it to me.

You’re dragging it too. I know now, you are not one to learn. But can you at least learn from this and move on?

Defending oneself is not 'dragging it too'. I'm literally replying to you stating that I am aware of the point you're stating repeatedly that I'm not aware of, but that I just disagree with you and how you're applying that point to me.

But instead of inquiring as to why I disagree, you're just repeating back more of the same thing.

Let's just agree to disagree on whether the point you're trying to make applies to me or not, and we both move on. It's such a trivial thing for you to keep hammering me on, it makes me wonder if you're just a conflict bot.

[–] [email protected] -3 points 7 months ago

Just the amount of text you wrote which I'll never read shows how you'll always try to prove your point, even if it was based on a fallacy to begin with. Just go and live a life my friend.

[–] [email protected] 1 points 7 months ago (1 children)
[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

That link doesn't prove whatever you think it's proving.

The open source ecosystem does not rely (exclusively) on project maintainers to ensure security. Security audits are also done by major enterprise-grade distribution providers like Red Hat. There are other stakeholders in the community as well who have a vested interest in security, including users in military, government, finance, health care, and academic research, who will periodically audit open source code that they're using.

When those organizations do their audits, they will typically report issues they find through appropriate channels which may include maintainers, distributors, and the MITRE Corporation, depending on the nature of the issue. Then remedial actions will be taken that depend on the details of the situation.

In the worst case scenario if an issue exists in an open source project that has an unresponsive or unhelpful maintainer (which I assume is what you were suggesting by providing that link), then there are several possible courses of action:

  • Distribution providers will roll back the package to an earlier compatible version that doesn't have the vulnerability if possible
  • Someone will fork the project and patch the fix (if the license allows), and distribution providers will switch to the fork
  • In the worst case scenario if neither of the above are possible, distribution providers will purge the vulnerable package from their distributions along with any packages that transitively depend on it (this is almost never necessary except as a short-term measure, and even then is extremely rare)
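In the xz case the rollback path is what actually happened: distributions reverted to pre-backdoor releases. The advisory-level check for CVE-2024-3094 boils down to a version test, since 5.6.0 and 5.6.1 were the compromised releases:

```python
# The advisory-level check for CVE-2024-3094: xz-utils 5.6.0 and 5.6.1 were
# the compromised releases; distributions rolled back to earlier versions.

VULNERABLE = {"5.6.0", "5.6.1"}

def xz_is_compromised(version: str) -> bool:
    """True if this xz-utils release is one of the backdoored versions."""
    return version in VULNERABLE

assert xz_is_compromised("5.6.1")
assert not xz_is_compromised("5.4.6")
```

In practice you'd feed this the version string from `xz --version` and defer to your distribution's advisory for the authoritative remediation.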

The point being, the ecosystem is NOT strictly relying on the cooperation of package maintainers to ensure security. It's certainly helpful and makes everything go much smoother for everyone if they do cooperate, but the vulnerability can still be identified and remedied even if they don't cooperate.

As for the original link, I think the correct takeaway from that is: If you have a vested or commercial interest in ensuring that the open source packages you use are secure from day zero, then you should really consider ways to support the open source projects you depend on, either through monetary contributions or through reviews and code contributions.

And if there's something you don't like about that arrangement, then please consider paying for licenses on closed-source software which will provide you with the very reassuring "security by sticking your head in the sand", because absolutely no one outside the corporation has any opportunity to audit the security of the software that you're using.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

That link doesn’t prove whatever you think it’s proving.

That link strengthens my argument that we're assuming because it's open source that the code is less likely to have security issues because it's easier to be audited, when in truth it really just depends on the maintainer to do the proper level of effort or not, since it's volunteer work.

When someone suggested a level of effort to be put on code checked in to prevent security issues from happening, the maintainer pushed back, stating that they will decide what level of effort they'll put in, because they're doing the work on a volunteer basis.

[–] [email protected] 1 points 7 months ago (1 children)

And my rebuttal is three-fold:

  1. Security does not depend entirely on the maintainer, and there is recourse even in the worst case scenario of an uncooperative or malicious maintainer.

  2. The maintainer you quoted said he would be open to complying with requests if the requesters were willing to provide monetary support. You are intentionally misrepresenting their position.

  3. The alternative of closed source software doesn't actually protect you from security issues, it just makes it impossible for any users to know if the software has been compromised. For all you know, a closed source software product could be using one of the hypothetical compromised open source software project that you're so afraid of, and you would never actually know.

If you're willing to pay a license for a private corporation's closed source software so you get the pleasure of never being able to know your security posture, then why would you be unwilling to financially support open source developers so they have the resources they need to have the level of security that you'd like from them?

[–] [email protected] 1 points 7 months ago (1 children)

You are intentionally misrepresenting their position.

No I'm not. Or you're assuming my position incorrectly.

[–] [email protected] 0 points 7 months ago (1 children)

You're either intentionally misrepresenting the post or you failed to understand them correctly. I'll let you take your pick for which is less embarrassing for you.

[–] [email protected] 1 points 7 months ago (1 children)

You’re either intentionally misrepresenting the post or you failed to understand them correctly.

You're incorrectly seeing more into what I'm saying than I'm actually saying, probably because you are very invested in defending Linux, and interpret what I'm saying as an attack on Linux.

For what it's worth, I'm not attacking Linux. I use Linux as my daily driver (Fedora/KDE).

[–] [email protected] 0 points 7 months ago (1 children)

The key sentence in the post you linked, which constituted more than 50% of the words stated by the poster, and which you somehow conveniently missed, completely negates the whole narrative you're trying to promote:

Speaking as an open source maintainer, if a tech company would like to pay me to do ~anything for my open source project, we can sit down and talk about my rates.

Which means this person is NOT simply a volunteer as you insinuated here:

When someone suggested a level of effort to be put on code checked in to prevent security issues from happening, the maintainer pushed back, stating that they will decide what level of effort they'll put in, because they're doing the work on a volunteer basis.

but in fact is available to be paid a fair rate for the labor they perform. In fact your entire description of the post is mischaracterizing what is being said in the post.

I don't know how you could have accidentally missed or misinterpreted one of the two sentences being said by the poster, and the longer of the two sentences at that. It was also the first sentence in the poster's statement. It seems more likely to me that you missed that on purpose rather than by accident. Maybe you're just so eager to find evidence to match your narrative that your brain registered the entire point of the post incorrectly. Allow me to reframe what's being said to simplify the matter:

As a self-employed contractor, if you demand that I perform free labor for you, I will decline that request.

Now just add a much more frustrated tone to the above and you get the post you linked.

[–] [email protected] 1 points 7 months ago (1 children)

Which means this person is NOT simply a volunteer as you insinuated here:

You're missing this part of what they said, take a second look (bolded part)...

if a tech company would like to pay me to do ~anything for my open source project, we can sit down and talk about my rates.

That means they haven't been paid yet; they're doing volunteer work, and they're publicly soliciting pay to do the work that we would all expect volunteers to already be doing anyway: making sure their code is secure. Which is my point.

And the rest of that quote...

Otherwise they can fuck right off and I'm going to do what I want with my project.

They're signaling publicly that since they're not getting paid to do the work they can do any level of effort, not just the required (security wise) effort.

We shouldn't assume that full diligent effort is being done to secure the code, just because it's open source and easily readable by anyone. Doesn't matter if there's easy access if no one ever actually looks at it.

I'm not saying it's never done, I'm just saying we should not assume it's always being done (my bet would be that, more often than not, it's not), and that is a real problem, as this story/situation demonstrates: capitalism, human nature, and volunteer-versus-paid work effort, all bounded by the hours available to do the job correctly.

I really wish you would just stop trying to defend Linux and open source development, and listen to the concept/opinion I'm actually stating, because it's really important for all of us that depends on open source efforts to be aware of it and act on it, not just stick our heads in the sand about it.

[–] [email protected] 0 points 7 months ago (1 children)

Your interpretation is simply not supported by the literal words being said by the person. "we can sit down and talk about my rates" implies that this person already has rates that they charge for the labor they do.

You're projecting a meaning into the person's words that simply isn't there, because you want it to fit a narrative that is not commensurate with reality.

You brought up your credentials earlier so now I'll bring up mine: My full time job, which I get paid a very competitive salary for, is to develop exclusively open source software. I have many collaborators in the industry, both at my same organization and from others (some non profits, some academic labs, some government agencies, but mostly private for-profit organizations) who contribute to open source projects either full time or part time.

I don't have one single collaborator who is the mythical unreliable open source volunteer you're talking about. Every single person I've worked with has a commercial or professional (i.e. academic, mission-driven) interest in the developmental health of open source software. When we decide what dependencies we use, we rule out anything that looks like a pet project or something with amateur maintenance because we know if the maintainer slacks off or goes rogue then that's going to be our problem.

The xz case is especially pernicious. This is a person who by all initial appearances was a respected professional doing respectable work. He/they (perhaps there was a team involved) went to great lengths to quietly infiltrate the ecosystem. I guarantee someone could do the same thing at a private company, but admittedly they're less likely to have as broad of an impact as they can by targeting the open source ecosystem.

I really wish you would just stop trying to defend Linux and open source development, and listen to the concept/opinion I'm actually stating

I am listening, and I'm telling you that you're wildly misunderstanding the nature of the open source industry. You, like many other software developers, are ignorant about how the vast bulk of widely used open source software gets developed.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

Your interpretation is simply not supported by the literal words being said by the person. “we can sit down and talk about my rates” implies that this person already has rates that they charge for the labor they do.

A reminder of the actual tweet...

"tech companies [...] started calling on open-source maintainers to beef up project governance. [...] mandatory two-person code reviews, self-assessments, SLAs, and written succession plans."

Speaking as an open source maintainer, if a tech company would like to pay me to do ~anything for my open source project, we can sit down and talk about my rates. Otherwise they can fuck right off and I'm going to do what I want with my project.

The point is not what the actual dollar amount would be; the point is distinguishing volunteer work that is currently being done for free from future paid work, and being able to dictate terms for how that work is to be done (security checks, etc.).

So at this point, I disagree with what you are saying, and I stand by what I've said.

Further, it's not worth my time discussing this further with you in particular. Apparently we live in two different realities, and you're completely knowledgeable about open source, where you know for a fact that I am not. Kind of hard to bridge that gap, conversationally. But at the end of the day, I can believe you, or my lying eyes (to quote Groucho Marx).

And actually at this point, after having spoken with you, especially with your latest comment where you stated what work you do/did for open source, I'm more fearful for open source codebases than I was before. Open source developers who take things personally and have a 'can do no wrong' mindset just set themselves up for more security attacks.

Have a nice day.

[–] [email protected] 0 points 7 months ago (1 children)

Nothing about the portion of the sentence you highlight actually implies that they haven't already been getting paid to do open source work. That's an interpretation that you're projecting onto the sentence because it fits your narrative. The poster never identified themselves as a volunteer. I've already reframed the sentence for you in a previous post, but I'll try one more time: "Whenever any tech company is willing to pay me to do work related to my open source project, I sit down with them and talk about my rates" is a semantically equivalent sentence to what the poster said.

You're also taking one single datapoint which has ambiguous credibility to begin with and extrapolating it to characterize a massive industry that you, like countless others, benefit from while hardly knowing anything about how the sausage gets made.

I'd be surprised if you've ever offered a substantive contribution to an open source project in your life, so I won't be losing any sleep if a freeloader loses confidence in the ecosystem. But realistically you'll be using open source software for the rest of your life because the reality is that closed source software really can't compete in terms of scale, impact, and accessibility. If you actually care about the quality and security of the things you depend on, then do something about it. And prattling ignorance on social media does not count as doing something.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (2 children)

Sorry, I realize I told you I was done with our conversation, but after doing so I stumbled upon this video, and thought I would share it with you, as it's pertinent to the issue we were discussing.

You keep arguing that open source projects are strict with their code base reviews and such and are as reliable as closed source products, and I keep seeing others saying that they are not suppliers, and everything is "as is". We can't both be right.

I don't plan on responding to you if you reply to this comment, as IMHO it would be a waste of time, as you'll just twist this video so that it's saying the opposite of what it's actually saying.

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago)

You keep arguing that open source projects are strict with their code base reviews

Go ahead and quote the words I said that suggest this. You have a talent for claiming that people have said things they have never actually said.

The only claims I've made in this conversation are:

  1. The open source ecosystem does NOT strictly rely on confidence in individual project maintainers because audits and remedial measures are always possible, and done more often than most people are aware of. Of course this could and should be done more often. And maybe it would if we didn't have so many non-contributing freeloaders in the community.
  2. Most of the widely used open source projects are not being done by hobbyists or volunteers but rather by professionals who are getting paid for their work, either via a salary or by commission as independent contractors.
  3. You don't seem to have a firm grasp on how open source software is actually developed and managed in general.