d3Xt3r

joined 1 year ago
[–] [email protected] 4 points 7 months ago (1 children)

And the second half of that solution (audio fingerprinting) can be solved using this: https://github.com/worldveil/dejavu
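
For anyone curious, the basic dejavu workflow looks roughly like this - a minimal sketch based on the project's README; the exact import paths and config keys (e.g. the database_type key) differ between versions, so treat the names here as assumptions:

```python
from dejavu import Dejavu
from dejavu.logic.recognizer.file_recognizer import FileRecognizer

# dejavu stores fingerprints in a database (Postgres/MySQL per the README)
config = {
    "database": {
        "host": "127.0.0.1",
        "user": "dejavu",
        "password": "secret",
        "database": "dejavu",
    },
    "database_type": "postgres",
}

djv = Dejavu(config)

# Fingerprint a directory of known tracks
djv.fingerprint_directory("known_tracks", [".mp3", ".wav"])

# Match an unknown clip against the fingerprint database
result = djv.recognize(FileRecognizer, "unknown_clip.mp3")
print(result)
```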

[–] [email protected] -1 points 7 months ago (1 children)

Also, AMD APUs use your main RAM, and some systems even allow you to change the allocation - so you could allocate say 16GB for VRAM, if you've got 32GB of RAM. There are also tools you can run to change the allocation, in case your BIOS doesn't have the option.

This means you can even run LLMs that require a large amount of VRAM, which is crazy if you think about it.
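
If you want to check how much memory the amdgpu driver actually has to play with (the dedicated carve-out vs GTT, i.e. the spill-over into system RAM), you can read it from sysfs - a quick sketch, assuming card0 is the APU (adjust the card number for your system):

```python
from pathlib import Path

# amdgpu exposes its memory pool sizes in sysfs (path assumed; adjust card number as needed)
dev = Path("/sys/class/drm/card0/device")

def read_bytes(name: str) -> int:
    return int((dev / name).read_text().strip())

vram = read_bytes("mem_info_vram_total")  # dedicated / carved-out VRAM
gtt = read_bytes("mem_info_gtt_total")    # system RAM the GPU can map (GTT)

print(f"VRAM: {vram / 2**30:.1f} GiB, GTT: {gtt / 2**30:.1f} GiB")
```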

[–] [email protected] 5 points 7 months ago (1 children)

Also, are there any modern recovery tools out there, that promise better reliability?

If Recuva didn't work, then you'd need to use a professional tool such as Runtime Software's GetDataBack. They've been working on it for over two decades - all the way since 2001, and I've used it on a few occasions with good success where TestDisk didn't work. The catch is that it's not free, but you can download the trial version first and run it to see if it can recover (preview) your files. And if the results are promising then you can buy the license and recover the files (no need to rerun the scan).

[–] [email protected] 10 points 7 months ago* (last edited 7 months ago)

If I ever have to use a command line for anything but THE most esoteric, potentially system-damaging scenarios

But you don't have to though, at least if you're running a sensible distro and have Linux-friendly hardware. My elderly parents for instance have been running Linux for over a decade now (Xubuntu first, now Zorin) - on bog standard Dell machines - and never once had to touch the command-line. I think I intervened a couple of times maybe 4 or 5 years ago, but I haven't had to do any major tech support or CLI intervention in the past few years.

Linux has come a long way. If you've got compatible hardware and don't have any specific proprietary software requirements (like Adobe etc), then I'd recommend giving it a try. If you're open-minded that is.

[–] [email protected] 12 points 7 months ago* (last edited 7 months ago)

Running recent AMD hardware and gaming. I have a ThinkPad Z13 with a Zen 3+ APU, a performance-oriented homelab machine with a recent Zen 4 APU, and a Zen 2 gaming desktop with a recent AMD GPU.

For the laptop, my main concerns are battery life, desktop responsiveness and gaming performance. As you may or may not be aware, the AMD space has seen a flurry of development activity these past couple of years thanks to Valve and the Steam Deck. There have been several improvements in the power management aspect in recent kernels, specifically the AMD p-state EPP driver. For desktop responsiveness, the new EEVDF scheduler has been a groundbreaking improvement over the old CFS scheduler. Finally, for gaming, there have been tons of performance improvements and bug fixes in the Mesa and Vulkan drivers, and as a laptop gamer I always aim to squeeze every bit of FPS I can get out of it. For some games, a recent Mesa makes a huge difference.
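
As a side note, you can verify you're actually running the amd-pstate EPP driver (and see which preference it's set to) straight from sysfs - a minimal sketch, assuming the standard cpufreq layout:

```python
from pathlib import Path

# Standard cpufreq sysfs layout; cpu0 used as a representative core
cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

driver = (cpu0 / "scaling_driver").read_text().strip()      # e.g. "amd-pstate-epp"
governor = (cpu0 / "scaling_governor").read_text().strip()  # e.g. "powersave"

epp_file = cpu0 / "energy_performance_preference"
epp = epp_file.read_text().strip() if epp_file.exists() else "n/a"

# Overall amd_pstate mode (active/passive/guided), if the file is present
status_file = Path("/sys/devices/system/cpu/amd_pstate/status")
status = status_file.read_text().strip() if status_file.exists() else "n/a"

print(f"driver={driver} mode={status} governor={governor} EPP={epp}")
```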

I also appreciate the improvements to the in-kernel NTFS3 driver since kernel 6.2 (where some important mount options were added), and most recently (tail end of 6.7) a bunch of bug fixes were also merged. I use an NTFS-formatted external drive for archival and file sharing between different machines (I also use macOS and Windows, hence why I went with NTFS), so any improvements to the NTFS3 driver are something I look forward to.

Next is my homelab setup: it's recent bleeding-edge AMD hardware which runs a ton of VMs (OpenShift container platform, Docker, Postgres and a bunch of web apps). When I'm working on it, I also use it for dev stuff and some work stuff - whilst all the VMs and containers are running in the background. So once again, I'm looking for stuff like EEVDF for desktop responsiveness, but also improvements to KVM and virtualisation performance in general. I'm also really excited for the upcoming kernel 6.9, because of the KSMBD and bcachefs improvements - particularly the latter, since I intend to evaluate a tiered storage setup using bcachefs, and if it's any good, I'll make the switch from btrfs.

Finally, for my gaming PC - obviously I'm always after the latest Mesa and Vulkan improvements, as well as overall desktop responsiveness and performance. In addition, I also care about things like VRR and HDR support, and all the Wayland-related improvements across the spectrum. All of which have seen vast improvements in recent times.

I mainly run Arch (with Cachy repos), which allows me to use optimised x86-64-v3/v4 packages for the best performance, as well as special AMD-GPU-optimised Mesa/Vulkan/VDPAU/VA-API drivers which are available only for Arch (as far as I'm aware; but maybe there's a PPA for *buntu as well?). In any case, with Arch I'm able to easily fine-tune and get the most out of my systems.
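
If you're wondering whether your own CPU can use those optimised packages, the microarchitecture levels are just fixed sets of CPU flags, so a quick check against /proc/cpuinfo does the job - a rough sketch (the flag lists follow the x86-64 psABI levels; on a recent-ish glibc, `/lib/ld-linux-x86-64.so.2 --help` is the authoritative check):

```python
# Rough check of x86-64-v3/v4 support from /proc/cpuinfo flags.
# Note: /proc/cpuinfo reports "abm", which covers the lzcnt requirement.
V3 = {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "abm", "movbe", "xsave"}
V4 = {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"}

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

print("x86-64-v3:", "yes" if V3 <= flags else "no")
print("x86-64-v4:", "yes" if (V3 | V4) <= flags else "no")
```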

So there you go, this is why I chase after recent packages and why Mint isn't suitable for me. I know that, if they wanted to, Mint users could subscribe to PPAs like Oibaf, or manually install recent kernels, but then you risk breaking the system, and that defeats the whole point of Mint's focus on stability. On the other hand, I don't mind recommending it for someone whose main use case is primarily home-office/web browsing etc and who has an older system. But for power users, gamers, or those who have recent hardware, I definitely cannot recommend Mint in good faith.

[–] [email protected] 7 points 7 months ago* (last edited 7 months ago) (4 children)

GNU is not a license, it's a project - one that practically spearheaded the whole FOSS movement back in the 80s. The programs that were part of the GNU project were licensed under the GNU General Public License (GPL), which was originally written by Richard Stallman, and evolved over time to its current version, GPLv3 (now backed by the Free Software Foundation). So the "GPL" is the actual license that can be applied to any program, should the developer choose to do so (so it's not limited just to the GNU project).

All GPL-licensed programs are considered to be FOSS. However, FOSS can also imply other licenses such as MIT, LGPL, Apache etc. Most of them are kinda similar, but they differ slightly in how permissive/restrictive they are when it comes to modifications and derivatives.

why are some many people saying charging for software isn't Foss when Richard stalman himself makes the point "This is a matter of freedom, not price, so think of “free speech,” not “free beer.”"

As you said, it's not about the price at all, the "free" means freedom. Specifically, the GPL explicitly states that you may charge money for the software. Other free software licences also generally state something similar.

The confusion regarding selling is best explained by the FSF:

Selling a copy of a free program is legitimate, and we encourage it.

However, when people think of “selling software,” they usually imagine doing it the way most companies do it: making the software proprietary rather than free.

So unless you're going to draw distinctions carefully, the way this article does, we suggest it is better to avoid using the term “selling software” and choose some other wording instead. For example, you could say “distributing free software for a fee”—that is unambiguous.

https://www.gnu.org/philosophy/selling.html

Also, just to be clear, open source =/= FOSS. Open source just means that the source code is available; FOSS however implies that you're free to modify and redistribute the program (+ some other freedoms/restrictions as per the specific license used).

[–] [email protected] 66 points 7 months ago* (last edited 7 months ago) (10 children)

From what I've heard so far, it's NOT an authentication bypass, but a gated remote code execution.

There's some discussion on that here: https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b

But it would be nice to have a diagram similar to OP's to understand how exactly it does the RCE and implements the SSH backdoor. If we understand how, maybe we can take measures to prevent similar exploits in the future.

[–] [email protected] 41 points 7 months ago (8 children)

Nice! The outdated kernel was one of the main reasons why I never recommended using Mint. Now, if they can do something about their other outdated packages like Mesa - and switch to Wayland - I'd be happy to recommend Mint.

[–] [email protected] 100 points 7 months ago (14 children)

This is informative, but unfortunately it doesn't explain how the actual payload works - how does it compromise SSH exactly?

[–] [email protected] 10 points 7 months ago* (last edited 7 months ago) (1 children)

They noticed they accidentally removed a 35GB folder full of media files from a very big vacation, including nature photography and some strange GoPro format files. Valuable stuff.

Are you sure the files were actually deleted? I used to work in helpdesk back in the day and would regularly get calls from users in similar situations, and 9 out of 10 times the folder wasn't actually deleted but accidentally moved to somewhere else - Windows Explorer is dumb like that, it's very easy to accidentally drag-drop a large folder elsewhere without any confirmation - just a flick of the wrist and you wouldn't even notice it. On the other hand, actually deleting a large folder not only presents a confirmation dialog, it also takes a long time to delete the files - and you'd notice it very quickly (unless you were AFK).

So I'd recommend running a thorough search first - both on the old drive and new drive.
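
If it helps, a quick sweep like this can surface where the big clusters of photo/video files actually live on both drives - purely an illustrative sketch; the drive letters and extensions (GoPro's .LRV/.THM sidecars are a guess at the "strange format" files) need adjusting to match the actual folder contents:

```python
import os
from collections import defaultdict

# Extensions are guesses based on "nature photography and strange GoPro format files"
EXTS = {".jpg", ".jpeg", ".png", ".raw", ".dng", ".mp4", ".lrv", ".thm"}
ROOTS = ["C:\\", "D:\\"]  # old and new drive - adjust to your setup

sizes = defaultdict(int)
for root in ROOTS:
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in EXTS:
                try:
                    sizes[dirpath] += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass

# Show the ten folders holding the most media by size
for path, total in sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{total / 2**30:6.2f} GiB  {path}")
```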

But if the files were actually deleted, I also second the recommendation of Recuva - IMO it just works better on NTFS drives compared to PhotoRec. After all, if the photos are really that valuable then you really should be using the best tools at your disposal.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago) (3 children)

Due to the userbase being all windows fans we'd need a full on GUI and i've been prodded towards Mint. Good idea or bad?

That is completely up to what their requirements are (which applications they use, workflow etc) and what your users are like. Some users are extremely resistant to change - and have connections to people in high places - so you'll need to think about how to handle them. Like back in my helpdesk days, we had a bunch of VIP users and admin staff oppose the upgrade to Office 2007 (from XP/2003), mainly due to its new ribbon interface, and also incompatibility with some of their custom macros etc. We were midway through the rollout and ended up completely halting the upgrades due to the fuss they kicked up. Office XP/2003 was already way out of support, but they didn't care or listen.

So yea, you'll need to ask your users, not us.

[–] [email protected] 9 points 7 months ago* (last edited 7 months ago)

In the sysadmin world, the current approach is to follow a zero-trust and defense-in-depth model. Basically, you do not trust anything. You assume that there's already a bad actor/backdoor/vulnerability in your network, and so you work towards mitigating that risk - using measures such as compartmentalisation and sandboxing (of data/users/servers/processes etc), role-based access control (RBAC), just-enough-access (JEA), just-in-time access (JIT), attack surface reduction etc.
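
To make the JEA/JIT part concrete: the idea is that access is a narrowly-scoped grant that expires on its own, rather than a standing permission. A purely illustrative sketch (not any particular product's API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative only: a just-in-time grant is scoped to one role on one resource
# and expires on its own, instead of being a standing permission.
@dataclass
class Grant:
    user: str
    role: str        # just-enough-access: the minimal role for the task
    resource: str    # e.g. a single host or namespace, not "the whole network"
    expires: datetime

def request_access(user: str, role: str, resource: str, minutes: int = 60) -> Grant:
    # In a real system this is where approval, MFA and audit logging would happen
    return Grant(user, role, resource,
                 datetime.now(timezone.utc) + timedelta(minutes=minutes))

def is_allowed(grant: Grant, user: str, role: str, resource: str) -> bool:
    return (grant.user, grant.role, grant.resource) == (user, role, resource) \
        and datetime.now(timezone.utc) < grant.expires

g = request_access("alice", "db-reader", "prod-postgres-01", minutes=30)
print(is_allowed(g, "alice", "db-reader", "prod-postgres-01"))  # True until it expires
print(is_allowed(g, "alice", "db-admin", "prod-postgres-01"))   # False - not the granted role
```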

Then there's network level measures such as conditional access, and of course all the usual firewall and reverse-proxy tricks so you're never exposing a critical service such as ssh directly to the web. And to top it all off, auditing and monitoring - lots of it, combined with ML-backed EDR/XDR solutions that can automatically baseline what's "normal" across your network, and alert you of any abnormality. The move towards microservices and infrastructure-as-code is also interesting, because instead of running full-fledged VMs you're just running minimal, ephemeral containers that are constantly destroyed and rebuilt - so any possible malware wouldn't live very long and would have to work hard at persistence. Of course, it's still possible for malware to persist in a containerised environment, but again that's where the defense-in-depth and monitoring comes into play.

So in the case of xz, say your hacker has access to ssh - so what? The box they got access to was just a jumphost; they can't get anywhere else important without knowing the right boxes and credentials. And even if those credentials are compromised, with JEA/JIT/MFA they're useless. And even if those protections are somehow bypassed, they'd only get access to a very specific box/area. And the more they traverse across the network, the greater the risk of leaving an audit trail or being spotted by the XDR.

Naturally none of this is 100% bullet-proof, but then again, nothing is. But that's exactly what the zero-trust model aims to combat. This is the world we live in, where we can no longer assume something is 100% safe. Proprietary software users have been playing this game for a long time, it's about time we OSS users also employ the same threat model.
