d3Xt3r

joined 1 year ago
[–] [email protected] 1 points 11 months ago

Nincompoop! Bashi-bazouk! Visigoth! Anacoluthon!

[–] [email protected] 13 points 11 months ago* (last edited 11 months ago) (1 children)

Sorry, I guess I meant Docker Desktop, and some of their other proprietary business/enterprise tools (like Docker Scout) that companies have started to use - the stuff that requires a paid subscription. The Docker engine itself remains open source of course, but a lot of their stuff that's targeted at enterprises isn't. These days when companies say "Docker" they don't mean just the engine, they're referring to the entire ecosystem.

Also, I have a problem with Docker itself. My main issue is that, on Linux, native container tech like Podman/LXD works, performs and integrates better (at least in my limited experience), but the industry prefers Docker (no surprises there). As a Linux guy, naturally I want to use the best tool for Linux, not whatever happens to be cross-platform (when I don't care about other platforms). But I can understand why companies would prefer Docker.
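For what it's worth, switching isn't even that painful, since Podman exposes a Docker-compatible API socket that most existing tooling can talk to. A rough sketch using the Docker Python SDK, assuming `podman system service` is running - the socket path below is just an example and varies per distro/user (check `podman info`):

```python
# Sketch: pointing the Docker SDK at Podman's Docker-compatible socket.
# Assumes `podman system service` is running; the socket path is an example
# (rootless Podman usually lives under /run/user/<uid>/, rootful under /run/podman/).
import docker

client = docker.DockerClient(base_url="unix:///run/user/1000/podman/podman.sock")

# Same calls you'd make against Docker Engine
logs = client.containers.run("docker.io/library/alpine", "echo hello", remove=True)
print(logs.decode())            # run() returns the container logs as bytes when detach=False
print(client.version()["Version"])
```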

[–] [email protected] 40 points 11 months ago* (last edited 11 months ago) (6 children)

It'll really depend on your local job market. I was on a serious job hunt earlier this year and I couldn't find a single Linux job that asked for LFCS certs. There were a couple which asked for Red Hat certs though. Of course, this could be specific to where I live, so I'd recommend looking at some popular job sites for your area (+ remote jobs too) and seeing how many, if any, ask for LFCS - you'll have your answer.

Should I focus more on dev ops? Security? Straight SysAdmin?

From what I've seen so far, the days of "traditional" Linux sysadmin roles are numbered, if not long gone already - it's mostly DevOps-y stuff now. Same with traditional security: these days it's more about DevSecOps.

As a modern Linux sysadmin, the technologies you should be looking at would be Ansible, Kubernetes, Terraform, containers (Docker mainly, but also Podman/LXD), GitOps, CI/CD and Infrastructure as Code (IaC) concepts and tools.
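To give a concrete flavour of what that day-to-day automation looks like, here's a rough sketch of driving an Ansible playbook from Python with the ansible-runner library - the directory, inventory and playbook below are made-up examples, not anything from a real job:

```python
# Sketch: kicking off an Ansible playbook from Python via ansible-runner.
# The private_data_dir layout, inventory and site.yml content are hypothetical examples.
import os
import ansible_runner

data_dir = "/tmp/ansible-demo"
os.makedirs(os.path.join(data_dir, "project"), exist_ok=True)

# A trivial playbook that just reports the kernel version of each host
playbook = """\
- hosts: all
  tasks:
    - name: Report the kernel version
      ansible.builtin.debug:
        msg: "{{ ansible_kernel }}"
"""
with open(os.path.join(data_dir, "project", "site.yml"), "w") as f:
    f.write(playbook)

result = ansible_runner.run(
    private_data_dir=data_dir,
    playbook="site.yml",
    inventory="localhost ansible_connection=local",  # INI-formatted inventory string
)
print(result.status, result.rc)  # e.g. "successful" 0
```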

Some Red Hat shops may also ask for OpenShift, Ansible Tower, Satellite etc. experience. IBM shops also use a lot of IBM tools, such as IBM Cloud Paks, Multicloud Management, and AIOps/Watson.

And finally there's all the "cloud" stuff like AWS, Azure and GCP - each with its own terminology that you'd need to know and understand (e.g. "S3", "Lambda") and its own certs to go with it. I suspect a "cloud" cert will net you more jobs than LFCS.
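As a small taste of what that cloud-specific tooling looks like, here's a boto3 sketch - it assumes AWS credentials/region are already configured, and the Lambda function name is purely hypothetical:

```python
# Sketch: poking at S3 and Lambda with boto3.
# Assumes AWS credentials/region are configured (env vars, ~/.aws/credentials, etc.);
# "my-report-generator" is a hypothetical Lambda function name.
import json
import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

lam = boto3.client("lambda")
resp = lam.invoke(
    FunctionName="my-report-generator",
    Payload=json.dumps({"month": "2024-01"}),
)
print(resp["StatusCode"], resp["Payload"].read())
```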

So as you'd probably be thinking by now, all of the above isn't something you'd know from just using desktop Linux. Of course, desktop Linux experience is certainly useful for understanding some of the core concepts and how it all works under the hood, but unfortunately that experience alone just isn't going to cut it if you're out looking for a job.

As I mentioned before, start looking at jobs in your area (or otherwise relevant to you), note down the technologies, terms and certs that appear most frequently, and start preparing for those. That is, assuming it's something you want to work with in the future.

Personally, I'm not a big fan of all this new tech (I'm fine with Ansible and containers, but I don't like the industry's dependency on proprietary tech like Docker Desktop, Amazon or Red Hat's stuff). I just wanted to work on pure Linux, with all the standard POSIX/GNU tools and DEs that we're familiar with, but sadly those sorts of jobs don't really exist anymore.

[–] [email protected] 20 points 11 months ago* (last edited 11 months ago) (1 children)

That's an issue/limitation with the model. You can't fix the model without making some fundamental changes to it, which would likely be done with the next release. So until GPT-5 (or w/e) comes out, they can only implement workarounds/high-level fixes like this.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (1 children)

If you're talking about the Storage Sense feature - it sucks. It only clears a handful of well-known locations, but it doesn't touch any of the orphaned content in C:\Windows\Installer, the CSC folder, or the old Panther folders left over from upgrades, not to mention several other files and folders in AppData. As I've said before, I was a Windows sysadmin (until last year, in fact) managing over 20,000 devices. We had Storage Sense on, but it was mostly useless - to the point that I ended up writing my own cleanup script and setting it to run before we pushed out a new Windows feature update, because otherwise we'd get several devices failing to update due to the disk being full.
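For the curious, a rough Python sketch of that sort of pre-update cleanup (not the actual script - the folder list and the 7-day cutoff here are just illustrative, and a real version needs admin rights plus far more care around locked/in-use files and exclusions):

```python
# Heavily simplified sketch of a pre-feature-update temp cleanup.
# Folder list and the 7-day age cutoff are illustrative only.
import os
import time
from pathlib import Path

CUTOFF_DAYS = 7
TARGETS = [
    Path(os.environ.get("TMP", r"C:\Windows\Temp")),
    Path(r"C:\Windows\Temp"),
    Path(os.path.expandvars(r"%LOCALAPPDATA%\Temp")),
]

def purge(folder: Path, cutoff_days: int = CUTOFF_DAYS) -> None:
    cutoff = time.time() - cutoff_days * 86400
    for path in folder.rglob("*"):
        try:
            if path.is_file() and path.stat().st_mtime < cutoff:
                path.unlink()
        except OSError:
            pass  # skip anything locked or in use

for target in TARGETS:
    if target.exists():
        purge(target)
```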

[–] [email protected] 34 points 11 months ago* (last edited 11 months ago) (4 children)

FYI, Windows doesn't have any feature to automatically clear all of its temp folders either (%TMP%, C:\Windows\Temp, C:\Windows\Panther), plus several other folders where orphaned files are often left over, such as C:\Windows\Installer, C:\Windows\CSC, and various folders and cache files in your AppData\Local etc., to name a few off the top of my head.

I used to be a Windows sysadmin for a long time, and let me tell you, HDDs becoming completely full due to cache/temp files is very much a problem in Windows.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (4 children)

Containers aside, why would you want to use Incus to run VMs, when you've already got KVM/libvirt? Are there any performance/resource utilization/other advantages to using it?

[–] [email protected] 5 points 11 months ago (2 children)

This is old news. This article was published on 7 Aug 2023.

[–] [email protected] 7 points 11 months ago* (last edited 11 months ago) (1 children)

In one of my previous roles as a sysadmin, our company signed a deal with HP to directly supply enterprise laptops to one of our clients as part of Microsoft's Autopilot deployment model, so users could get a new/replacement laptop directly and have it customized on the fly at first logon, instead of us having to manually build it the traditional way and ship it out. It worked fine in our pilot testing, so we decided to roll it out to a wider audience.

However, one problem which arose after the wider rollout was that SCCM wasn't able to connect to any of these machines (we had it in co-management mode), and even the laptops which were able to communicate previously stopped communicating. It was working fine in our pilot phase, but something was now blocking the traffic to SCCM and we couldn't figure it out - it was all okay on the network/firewall side, so we thought it could be a configuration issue on the SCCM server side and raised a priority ticket with MS. After some investigation, we found the root cause - it turned out to be this nasty app called HP Wolf Security - which was new at the time - that HP had started tacking on to all devices, unbeknownst to us. Wolf was supposed to be an "endpoint protection" solution - which no one asked for, especially since we already had Defender. I searched online and found tons of similar issues reported by other users, all caused by Wolf. I've lost some of my respect for HP since then - who tf pulls stunts like this on an enterprise level?!

[–] [email protected] 5 points 11 months ago* (last edited 11 months ago) (1 children)

What kind of "everyday" server stuff is efficiently making use of ≈300 cores? It's clearly some set of tasks that can be done independently of one another, but do you know more specifically what kind of things people need this many cores on a server for?

Traditionally VMs would be the use case, but these days, at least in the Linux/cloud world, it's mainly containers. Containers, and the whole ecosystem that is built around them (such as Kubernetes/OpenShift etc) simply eat up those cores, as they're designed to scale horizontally and dynamically. See: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale

Normally, you'd run a cluster of multiple servers to host such workloads, but imagine if all those resources were available on one physical host - it'd be a lot more efficient, since at the very least you'd be avoiding all that network overhead and latency. Of course, you'd still have at least a two-node cluster for HA, but the efficiency of a high-end node still rules.
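To illustrate the kind of autoscaling that chews through cores, here's a rough sketch of creating a HorizontalPodAutoscaler with the official Kubernetes Python client - the deployment name, namespace and replica limits are arbitrary examples, not from any real cluster:

```python
# Sketch: creating a HorizontalPodAutoscaler with the Kubernetes Python client.
# Assumes a working kubeconfig; "web", "demo" and the replica limits are made up.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=50,  # up to 50 pod replicas, spread across whatever cores the cluster has
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="demo", body=hpa
)
```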

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

A GPU is used for a lot more than just gaming these days. It's used to render videos, accelerate normal 2D programs (like some terminal emulators), and accelerate some websites/webapps (those which use WebGL, for example); modern DEs like GNOME and KDE also make heavy use of it, for instance for animations and window transitions. Those smooth animations that you see when you activate the workspace switcher or window overview? That's your GPU at work. Are your animations jittery/laggy? That means your setup is less than ideal. Of course, you could ignore all that and just go for a simple DE like XFCE or MATE which is fully CPU-driven, but then the issue of video acceleration still remains (unless you don't plan on watching HD videos).

Without the right drivers (typically NOT nouveau, unless you're on a very old card), you may find your overall experience less than ideal. As you can see in their official feature matrix, only the NV40-series cards fully support video acceleration - those are cards launched between 2004 and 2006, which is practically ancient in computer terms, and I highly doubt your PC uses one of them. More recent cards do support video acceleration, but you'll need to extract the firmware blobs from the proprietary drivers (which can be a PITA on normal Debian as it's a manual process); plus, even after that, the drivers won't support some features that may be required by normal programs, as you can see from the matrix.
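If you want to quickly check which kernel driver your card is actually using, something like this works on most Linux systems (a quick sketch; card numbering varies between machines):

```python
# Quick sketch: report which kernel driver each GPU/display device is using
# (e.g. nouveau, nvidia, amdgpu, i915). Card numbering varies per machine.
import glob
import os

for dev in sorted(glob.glob("/sys/class/drm/card[0-9]")):
    driver_link = os.path.join(dev, "device", "driver")
    if os.path.islink(driver_link):
        driver = os.path.basename(os.readlink(driver_link))
        print(f"{os.path.basename(dev)}: {driver}")
```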

The natural solution of course would be to install the proprietary nVidia drivers, but you do NOT want to do that (unless you're a desperate gamer), as there's a high possibility of running into issues like not being able to use Wayland properly, or breaking your system when you update it - just Google "Linux update black screen nVidia" and you'll see what I mean.

You'll avoid a lot of headaches if you just go with AMD; even onboard graphics like an Intel iGPU (if your CPU has one) would be a much better option - because in either case, you'll be using fully capable and stable open source drivers and you won't face any of those issues.

Also, watch this video: https://youtube.com/watch?v=OF_5EKNX0Eg
