Flmaker

joined 1 month ago
[–] Flmaker@lemmy.world 2 points 4 days ago

Thanks for sharing the link! I wouldn't have known about it otherwise.

[–] Flmaker@lemmy.world -3 points 5 days ago (3 children)

I consider myself an open-source user, but I struggle to understand why I should trust these projects when I lack the technical knowledge to evaluate the underlying code, which is frequently updated. I am skeptical about the enthusiasm surrounding open-source software, especially since it is practically impossible for an independent auditor to verify every update.

This raises the question of why we should place our trust in these systems.

Then, through intensive searching, I found similar doubts expressed in many online communities, including the one you mentioned.

I feel compelled to raise this issue, as it may help me—and others—better understand the rationale behind the blind trust placed in open-source software.

Additionally, I have noticed that open-source supporters often seem hesitant to address this dilemma. I wanted to bring this concern to the community here by sharing opinions from other places and asking whether I am the only one (or one of the very few) who harbors such doubts.

This is why I believe it is an important topic for me to share and discuss with the members here (who are more knowledgeable than I am), which is my END GOAL, in answer to your specific question.

Meanwhile, I will continue using open-source applications as I seek out like-minded individuals who share my doubts and push for further scrutiny.

[–] Flmaker@lemmy.world 1 points 6 days ago* (last edited 6 days ago)

Take Open Source with a Grain of Salt: The Real Trust Dilemma

In the age of open-source software, there is a growing assumption that transparency inherently guarantees security and integrity. The belief is that anyone can check the code, find vulnerabilities, and fix them, making open-source projects safer and more reliable than their closed-source counterparts. However, this belief is often oversimplified, and it’s crucial to take open-source with a grain of salt. Here’s why.

The Trust Dilemma: Can We Really Trust Open Source Code?

There’s a famous story from the world of open-source development that highlights the complexity of this issue. Linus Torvalds, the creator of Linux, was once allegedly asked by the CIA to insert a backdoor into the Linux kernel. His response? He supposedly said, "No can do, too many eyes on the code." It seems like a reassuring statement, given the vast number of contributors to open-source projects, but it doesn’t fully account for the subtleties of how code can be manipulated.

Not long after Torvalds’ alleged interaction, a suspicious change was discovered in the Linux kernel—an "if" statement that wasn't comparing values but instead making an assignment to user ID 0 (root). This change wasn't a mistake; it was intentional, and yet it slipped through the cracks until it was discovered before going live. The question arises: who had the power to insert such a change into the code, bypassing standard review processes and security protocols? The answer remains elusive, and this event highlights a critical reality: even the open-source community isn’t immune to vulnerabilities, malicious actors, or hidden agendas.

Trusting the Maintainers

In the world of open-source, you ultimately have to trust the maintainers. While the system allows for community reviews, there’s no guarantee that every change is thoroughly vetted or that the maintainers themselves are vigilant and trustworthy. In fact, history has shown us that incidents like the XZ Utils supply-chain attack can go unnoticed for extended periods, even with a large user base. In the case of XZ, the malware was caught by accident, revealing a stark reality: while open-source software offers the potential for detection, it doesn’t guarantee comprehensive oversight.

It’s easy to forget that the very same trust issues apply to both open-source and closed-source software. Both models are prone to hidden vulnerabilities and backdoors, but in the case of open-source, there’s often an assumption that it’s inherently safer simply because it’s transparent. This assumption can lead users into a false sense of security, which can be just as dangerous as the opacity of closed-source systems.

The Challenge of Constant Auditing

Let’s be clear: open-source code isn’t guaranteed to be safe just because it’s open. Just as proprietary software can hide malicious code, so too can open-source. Consider how quickly vulnerabilities can slip through the cracks without active, ongoing auditing. When you’re dealing with software that’s updated frequently, like Signal or any other open-source project, a single audit isn’t enough; it needs to be audited continuously, by developers with deep technical knowledge, with every update.

Here’s the catch: most users, particularly those lacking a deep understanding of coding, can’t assess the integrity of the software they’re using. Imagine someone without medical expertise trying to verify their doctor’s competence. It’s a similar situation in the tech world: unless you have the skills to inspect the code yourself, you’re relying on others to do so. In this case, the “others” are the project’s contributors, who might be few in number or lack the necessary resources for a comprehensive security audit.

Moreover, open-source projects don’t always have the manpower to conduct ongoing audits, and this becomes especially problematic with the shift toward software-as-a-service (SaaS). As more and more software shifts its critical functionality to the cloud, users lose direct control over the environment where the software runs. Even if the code is open-source, there’s no way to verify that the code running on the server matches the open code posted publicly.

The Reproducibility Issue

One of the most critical issues with open-source software lies in ensuring that the code you see matches the code you run. While reproducible builds are a step in the right direction, they only help ensure that the built binaries match the source code. But that doesn’t guarantee the source code itself hasn’t been altered. In fact, one of the lessons from the XZ Utils supply-chain attack is that the attack wasn’t in the code itself but in the build process. The attacker inserted a change into a build script, which was then used to generate the malicious binaries, all without altering the actual source code.

This highlights a crucial issue: even with open-source software, the integrity of the built artifacts—what you actually run on your machine—can’t always be guaranteed, and without constant scrutiny, this risk remains. It’s easy to assume that open-source software is free from these risks, but unless you’re carefully monitoring every update, you might be opening the door to hidden vulnerabilities.

A False Sense of Security

The allure of open-source software lies in its transparency, but transparency alone doesn’t ensure security. Much like closed-source software, open-source software can be compromised by malicious contributors, dependencies, or flaws that aren’t immediately visible. As the XZ incident demonstrated, even well-established open-source projects can be vulnerable if they lack active, engaged contributors who are constantly checking the code. Just because something is open-source doesn’t make it inherently secure.

Moreover, relying solely on the open-source nature of a project without understanding its review and maintenance processes is a risky approach. While many open-source projects have a strong track record of security, others are more vulnerable due to lack of scrutiny, poor contributor vetting, or simply not enough people actively reviewing the code. Trusting open-source code, therefore, requires more than just faith in its transparency—it demands a keen awareness of the process, contributors, and the ongoing review that goes into each update.

Conclusion: Take Open Source with a Grain of Salt

At the end of the day, the key takeaway is that just because software is open-source doesn’t mean it’s inherently safe. Whether it’s the potential for hidden backdoors, the inability to constantly audit every update, or the complexities of ensuring code integrity in production environments, there are many factors that can undermine the security of open-source projects. The fact is, no system—open or closed—is perfect, and both models come with their own set of risks.

So, take open source with a grain of salt. Recognize its potential, but don’t assume it’s free from flaws or vulnerabilities. Trusting open-source software requires a level of vigilance, scrutiny, and often, deep technical expertise. If you lack the resources or knowledge to properly vet code, it’s crucial to rely on established, well-maintained projects with a strong community of contributors. But remember, no matter how transparent the code may seem, the responsibility for verification often rests on individual users—and that’s a responsibility that’s not always feasible to bear.

In the world of software, the real question is not whether the code is open, but whether it’s actively maintained, thoroughly audited, and transparently reviewed

AFTER

EVERY

SINGLE

UPDATE.

Until we can guarantee that, open-source software should be used with caution, not blind trust.

[–] Flmaker@lemmy.world 1 points 6 days ago

better the devil you know, I suppose

-1
submitted 6 days ago* (last edited 6 days ago) by Flmaker@lemmy.world to c/privacy@lemmy.world
 
 

Trusting Open Source: Can We Really Verify the Code Behind the Updates?

In today's fast-paced digital landscape, open-source software has become a cornerstone of innovation and collaboration. However, as the FREQUENCY and COMPLEXITY of UPDATES increase, a pressing question arises: how can users—particularly those without extensive technical expertise—place their trust in the security and integrity of the code?

The premise of open source is that anyone can inspect the code, yet the reality is that very few individuals have the time, resources, or knowledge to conduct a thorough review of every update. This raises significant concerns about the actual vetting processes in place. What specific mechanisms or community practices are established to ensure that each update undergoes rigorous scrutiny? Are there standardized protocols for code review, and how are contributors held accountable for their changes?

Moreover, the sheer scale of many open-source projects complicates the review process. With numerous contributors and rapid iterations, how can we be confident that the review processes are not merely cursory but genuinely comprehensive and transparent? The potential for malicious actors to introduce vulnerabilities or backdoors into the codebase is a real threat that cannot be ignored. What concrete safeguards exist to detect and mitigate such risks before they reach end users?

Furthermore, the burden of verification often falls disproportionately on individual users, many of whom may lack the technical acumen to identify potential security flaws. This raises an essential question: how can the open-source community foster an environment of trust when the responsibility for code verification is placed on those who may not have the expertise to perform it effectively?

In light of these challenges, it is crucial for the open-source community to implement robust mechanisms for accountability, transparency, and user education. This includes fostering a culture of thorough code reviews, encouraging community engagement in the vetting process, and providing accessible resources for users to understand the software they rely on.

Ultimately, as we navigate the complexities of open-source software, we must confront the uncomfortable truth: without a reliable framework for verification, the trust we place in these systems may be misplaced. How can we ensure that the promise of open source is not undermined by the very vulnerabilities it seeks to eliminate?

 

cross-posted from: https://lemmy.world/post/27344091

  1. Persistent Device Identifiers

My id is (1 digit changed to preserve my privacy):

38400000-8cf0-11bd-b23e-30b96e40000d

Android assigns Advertising IDs, unique identifiers that apps and advertisers use to track users across installations and account changes. Google explicitly states:

“The advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps.” Source: Google Android Developer Documentation

This ID allows apps to rebuild user profiles even after resets, enabling persistent tracking.

  2. Tracking via Cookies

Android’s web and app environments rely on cookies with unique identifiers. The W3C (web standards body) confirms:

“HTTP cookies are used to identify specific users and improve their web experience by storing session data, authentication, and tracking information.” Source: W3C HTTP State Management Mechanism https://www.w3.org/Protocols/rfc2109/rfc2109

Google’s Privacy Sandbox initiative further admits cookies are used for cross-site tracking:

“Third-party cookies have been a cornerstone of the web for decades… but they can also be used to track users across sites.” Source: Google Privacy Sandbox https://privacysandbox.com/intl/en_us/
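To make the mechanism concrete, here is a minimal sketch (using Python's standard library; the cookie name `visitor_id` is hypothetical) of how a tracking server mints a long-lived unique identifier and recognizes it on every later request:

```python
import uuid
from http.cookies import SimpleCookie
from typing import Optional

def issue_tracking_cookie() -> SimpleCookie:
    """Mint a long-lived cookie carrying a random unique identifier,
    the way a tracking server might. 'visitor_id' is a hypothetical name."""
    c = SimpleCookie()
    c["visitor_id"] = uuid.uuid4().hex
    c["visitor_id"]["max-age"] = str(60 * 60 * 24 * 365)  # persist for a year
    c["visitor_id"]["path"] = "/"
    return c

def read_visitor_id(cookie_header: str) -> Optional[str]:
    """Parse an incoming Cookie header and recover the identifier, if any.
    Every later request carrying this header re-identifies the visitor."""
    c = SimpleCookie()
    c.load(cookie_header)
    return c["visitor_id"].value if "visitor_id" in c else None
```

The identifier itself is meaningless; the tracking comes from the server joining it with everything else observed on requests that carry it.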

  3. Ad-Driven Data Collection

Google’s ad platforms, like AdMob, collect behavioral data to refine targeting. The FTC found in a 2019 settlement:

“YouTube illegally harvested children’s data without parental consent, using it to target ads to minors.” Source: FTC Press Release https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-settlement-over-claims

A 2022 study by Aarhus University confirmed:

“87% of Android apps share data with third parties.” Source: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies https://dl.acm.org/doi/10.1145/3534593

  4. Device Fingerprinting

Android permits fingerprinting by allowing apps to access device metadata. The Electronic Frontier Foundation (EFF) warns:

“Even when users reset their Advertising ID, fingerprinting techniques combine static device attributes (e.g., OS version, hardware specs) to re-identify them.” Source: EFF Technical Analysis https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea
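The core of the technique the EFF describes can be sketched in a few lines (the attribute names here are illustrative; real fingerprinting scripts combine dozens of signals):

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine static device attributes into a stable identifier.

    None of these attributes change when the user resets the
    Advertising ID, so the fingerprint survives the reset.
    """
    canonical = json.dumps(attributes, sort_keys=True)  # stable ordering
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```

Because the inputs are static hardware and OS properties, the same device always hashes to the same identifier, which is exactly why resetting the Advertising ID alone does not stop re-identification.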

  5. Hardware-Level Tracking

Google’s Titan M security chip, embedded in Pixel devices, operates independently of software controls. Researchers at Technische Universität Berlin noted:

“Hardware-level components like Titan M can execute processes that users cannot audit or disable, raising concerns about opaque data collection.” Source: TU Berlin Research Paper https://arxiv.org/abs/2105.14442

Regarding Titan M: much of the research on it has been taken down, and very little remains online. The paper below is one of the few still available today.

"In this paper, we provided the first study of the Titan M chip, recently introduced by Google in its Pixel smartphones. Despite being a key element in the security of these devices, no research is available on the subject and very little information is publicly available. We approached the target from different perspectives: we statically reverse-engineered the firmware, we audited the available libraries on the Android repositories, and we dynamically examined its memory layout by exploiting a known vulnerability. Then, we used the knowledge obtained through our study to design and implement a structure-aware black-box fuzzer, mutating valid Protobuf messages to automatically test the firmware. Leveraging our fuzzer, we identified several known vulnerabilities in a recent version of the firmware. Moreover, we discovered a 0-day vulnerability, which we responsibly disclosed to the vendor."

Ref: https://conand.me/publications/melotti-titanm-2021.pdf

  6. Notification Overload

A 2021 UC Berkeley study found:

“Android apps send 45% more notifications than iOS apps, often prioritizing engagement over utility. Notifications act as a ‘hook’ to drive app usage and data collection.” Source: Proceedings of the ACM on Human-Computer Interaction https://dl.acm.org/doi/10.1145/3411764.3445589

How can this be used nefariously?

Let's say you are a person who believes in Truth and who searches all over the net for truth. You find some things which are true. You post it somewhere. And you are taken down. You accept it since this is ONLY one time.

But, this is where YOU ARE WRONG.

THEY can easily learn your IDs - specifically your advertising ID, or one of the others above. They send this to Google to find out which EMAIL accounts are associated with these IDs. With 99.9% accuracy, AI can identify the correct email, because your EMAIL and ID will have been logged into Google SIMULTANEOUSLY thousands of times in the past.

Then they can CENSOR you ACROSS the internet - YouTube, Reddit, etc. - because they know your ID. Even if you change your mobile, they still have other IDs, like your email. You can't remove all of them. This is how this can be used for CENSORING. (They will shadow-ban you, and you won't know it.)

 

[–] Flmaker@lemmy.world 1 points 2 weeks ago* (last edited 1 week ago)

Thank you for that. I have been testing it right now, trying to get the full articles at once through the rules, if any. Unable to find a solution yet; I will search further to see if I can get the full articles at once for offline reading.

Update: Unable to get the full article, other than the original page view.

I followed the info below, but it doesn't always work:

Convert Partial Articles in Feeds to Full-Text Articles

Feedbro has a built-in engine for transforming partial-text feed articles into full-text articles. This feature is not automatically on for all feeds and must be enabled per feed. To do that, right-click the feed in the feed tree and select Properties, then use "Feed Entry Content" to adjust the full-text extraction settings. Click the "Preview" button to check that you get the desired results, then press "Save".

Note that the full-text option obviously should not be used if the feed already provides full articles. Also, the full-text conversion doesn't work for all feeds and sites, but it works pretty well for the majority of sites.
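For a sense of what such a full-text engine does under the hood (this is not Feedbro's actual implementation, just a crude sketch): given the HTML of the fetched article page, it keeps the main content and discards navigation and chrome. A minimal stand-in using only Python's standard library, assuming the site wraps its content in an `<article>` element:

```python
from html.parser import HTMLParser

class ArticleTextExtractor(HTMLParser):
    """Collect text found inside an <article> element, skipping the rest."""
    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting level inside <article>
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "article" and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

def extract_article_text(html: str) -> str:
    """Return the article body text from a fetched page's HTML."""
    parser = ArticleTextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

Real extractors are far more heuristic (scoring blocks by text density, stripping ads and comments), which is why, as the Feedbro note says, conversion doesn't work for all sites.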

So far, of all the suggestions, Chaski is the one I like,

but you need to be online to retrieve the full articles one by one first; then it lets you read them offline.

Nothing like a podcast player, which always downloads the full article to be read offline.

[–] Flmaker@lemmy.world 2 points 2 weeks ago* (last edited 1 week ago)

Thank you, I have just tried that. Fluent Reader doesn't cache.

So far, of all the suggestions, Chaski is the one I like,

but you need to be online to retrieve the full articles one by one first; then it lets you read them offline.

Nothing like a podcast player, which always downloads the full article to be read offline.

[–] Flmaker@lemmy.world 2 points 2 weeks ago* (last edited 1 week ago)

Thank you. So far, of all the suggestions, Chaski is the one I like,

but you need to be online to retrieve the full articles one by one first; then it lets you read them offline.

Nothing like a podcast player, which always downloads the full article to be read offline.

[–] Flmaker@lemmy.world 2 points 2 weeks ago* (last edited 1 week ago)

I agree. So far, of all the suggestions, Chaski is the one I like, but you need to be online to retrieve the full articles one by one first; then it lets you read them offline. Nothing like a podcast player, which always downloads the full article to be read offline.

 

Need Your Suggestions: RSS Reader for Windows PC

I have been happy with a podcast player's feed reader on my Android for some time,

but I am about to give up because its screen size makes it difficult to read long articles, so I need an app for a Windows PC (one that gets the full text and then lets me read it offline).

I would appreciate your guidance on the best recommended RSS readers for Windows PC that are:

- Visually appealing on a Windows laptop
- Able to fetch feeds with full text and let me read them offline
 

[–] Flmaker@lemmy.world 1 points 3 weeks ago

This is the only way I managed to get it done. Thank you!

[–] Flmaker@lemmy.world 1 points 1 month ago (2 children)

I tried all of those already; none seems to work for the portable version. Have you actually set up portable LibreWolf yourself as the default browser?

 

I'd appreciate your help, please.

[–] Flmaker@lemmy.world 1 points 1 month ago (1 children)

Recent news: if VPNs are targeted, cloud accounts could be compromised too. See "Massive brute force attack uses 2.8 million IPs to target VPN devices": https://www.bleepingcomputer.com/news/security/massive-brute-force-attack-uses-28-million-ips-to-target-vpn-devices/

[–] Flmaker@lemmy.world 1 points 1 month ago

Recent news: if VPNs are targeted, cloud accounts could be compromised too. See "Massive brute force attack uses 2.8 million IPs to target VPN devices": https://www.bleepingcomputer.com/news/security/massive-brute-force-attack-uses-28-million-ips-to-target-vpn-devices/

 

Dear Friends,

I just wanted to take a moment to sincerely thank everyone for your incredibly thoughtful and detailed responses about films in general. Meanwhile, I find myself in a difficult situation when it comes to safeguarding the PERSONAL FAMILY PHOTOS and VIDEOS.

  • On one hand, if I choose to store them encrypted online in the cloud (edit: encrypt first, then upload), I face significant privacy concerns. While they might be secure now, there is always the potential for breaches or compromises in the very near future, especially with the evolving risks associated with AI training and data misuse.

The idea of the personal moments being used in ways I can’t control or predict is deeply unsettling.

  • On the other hand, keeping these files offline doesn’t feel like a perfect solution either. There is still a considerable risk of losing them to physical damage, especially since I live in an area prone to earthquakes. The possibility of losing IRREPLACEABLE MEMORIES to natural disasters or other unforeseen events is a constant WORRY.

How can I effectively balance these privacy, security, and physical risks to ensure the long-term safety and integrity of the FAMILY’S PERSONAL MEMORIES?

Are there strategies or solutions that can protect them both digitally and physically, while minimizing these threats?

 

How do you ensure privacy and security on cloud platforms in an age of compromised encryption, backdoors, and AI-driven hacking threats to encryption and user confidentiality?

Let’s say you’ve created a film and need to securely upload the master copy to the cloud. You want to encrypt it before uploading to prevent unauthorized access. What program would you use to achieve this?

Now, let’s consider the worst-case scenario: the encryption software itself could have a backdoor, or perhaps you're worried about AI-driven hacking techniques targeting your encryption.

Additionally, imagine your film is being used to train AI databases or is exposed to potential brute-force attacks while stored in the cloud.

What steps would you take to ensure your content is protected against a wide range of threats and prevent it from being accessed, leaked, or released without your consent?

view more: next ›