rentar42

joined 1 year ago
[–] [email protected] 1 points 7 months ago

Now you make me feel old. In "the olden days", before streaming media over the internet was as commonplace as it is now, that was the standard way tech-savvy people consumed media: either on their PC or with some set-top box with built-in storage. I fondly remember my PopcornHour, which was part of a line of set-top boxes that ranged from "basically a hard disk, video decoder and HDMI out" all the way to "can automatically rip your Blu-rays".

[–] [email protected] 11 points 7 months ago

A custom "source available" license that may not be as clear-cut as intended and depends on "we know it when we see it" by the authors of the license? You don't say!

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

I've not tried that myself, but AFAIK VLC can be remotely controlled in various ways, and since the API for that is open, multiple clients for it exist: https://wiki.videolan.org/Control_VLC_from_an_Android_Phone
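For what it's worth, the simplest of those ways is probably VLC's built-in HTTP interface. Here's a rough sketch of driving it (host, port and password below are placeholders; you need to enable the HTTP interface and set a password in VLC first):

```python
# Minimal sketch: remote-controlling VLC via its built-in HTTP interface.
# Assumes VLC was started with that interface enabled and a password set,
# e.g.:  vlc --extraintf http --http-password secret
# Host, port and password are placeholders for your own setup.
import requests

VLC = "http://192.168.1.50:8080"
AUTH = ("", "secret")  # VLC expects an empty username plus the HTTP password


def vlc_command(command: str, **params) -> str:
    """Send a command to VLC and return the raw status XML."""
    query = {"command": command, **params}
    resp = requests.get(f"{VLC}/requests/status.xml", params=query, auth=AUTH, timeout=5)
    resp.raise_for_status()
    return resp.text


vlc_command("pl_pause")          # toggle play/pause
vlc_command("volume", val=256)   # set volume (256 = 100%)
```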

There's also Clementine which offers a remote-control Android app.

[–] [email protected] 8 points 8 months ago

https://lemmy.world/post/12995686 was a recent question and most of the answers will basically be duplicates of that.

One slight addition: "Docker" is just one implementation of "OCI containers". It's the one that initially broke through in the hype, but you can just as easily use any other (podman being a popular one), and basically all of the benefits that people ascribe to "docker" apply to those as well.

So you might (as I do) have some dislike for docker (the product) and still enjoy running containers.
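To illustrate that interchangeability with a rough sketch (assumptions: a rootless podman setup with its Docker-compatible API socket enabled, e.g. via `systemctl --user start podman.socket`; socket path and image are placeholders): tooling written for docker, such as the official Python SDK, will generally talk to podman unchanged.

```python
# Sketch: the docker SDK for Python (pip install docker) pointed at a
# rootless podman socket instead of the docker daemon. The socket path
# assumes a typical rootless setup under /run/user/<uid>; adjust for yours.
import docker

client = docker.DockerClient(base_url="unix:///run/user/1000/podman/podman.sock")

# Exactly the same SDK calls you'd use against the docker daemon.
output = client.containers.run(
    "docker.io/library/alpine",
    ["echo", "hello from podman"],
    remove=True,
)
print(output.decode().strip())
```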

[–] [email protected] 15 points 8 months ago* (last edited 8 months ago)

I personally prefer podman, due to its rootless mode being "more default" than in docker (rootless docker works, but it's basically an afterthought).

That being said: there are just so many tutorials, tools and other resources that assume docker by default that starting with docker is definitely the less cumbersome approach. It's not that podman is significantly harder or has many big differences, but all the tutorials are basically written with docker as the first target in mind.

In my homelab the progression was docker -> rootless docker -> podman and the last step isn't fully done yet, so I'm currently running a mix of rootless docker and podman.

[–] [email protected] 6 points 8 months ago (1 children)

You've got a single, old HDD attached via USB. There are plenty of places that could be the bottleneck here, but that's among the first things I'd check. Can you actually read from that HDD significantly faster than your network transfer speed? Check that locally first. No use in optimizing anything network-related when your underlying disk IO is slow.
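As a rough local check (a sketch, not a proper benchmark; the file path is a placeholder), something like this gives you a ballpark sequential read speed to compare against your network throughput:

```python
# Quick-and-dirty sequential read check for the USB HDD.
# Point PATH at a large file on that disk that you haven't read recently,
# otherwise the page cache will inflate the result.
import time

PATH = "/mnt/usb-hdd/some-large-file.mkv"  # placeholder
CHUNK = 4 * 1024 * 1024  # read in 4 MiB chunks

total = 0
start = time.monotonic()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start

print(f"read {total / 1e6:.0f} MB in {elapsed:.1f} s "
      f"({total / 1e6 / elapsed:.1f} MB/s)")
```

If that number isn't comfortably above what your network can push, no amount of network tuning will help.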

[–] [email protected] 44 points 8 months ago (4 children)

In the immortal words of Jake the Dog:

Dude, suckin’ at something is the first step to being sorta good at something.

We all are, or were, noobs at some point. Stepping away from the keyboard is an often undervalued step in the solution-finding process. Kudos!

[–] [email protected] 1 points 8 months ago

Given the very specific dependencies that Immich has wrt. the Postgres plugins it needs, I'm certain that it's not currently packaged as an RPM and I would even bet that it never will be (at least not as one of the officially supported packages put out by the developers).

[–] [email protected] 4 points 8 months ago

Can confirm the statistics: I recently consolidated about a dozen old hard disks of various ages; quite a few of them had a couple of bad blocks and two actually failed. One disk was especially noteworthy in that it was still fast, error-free and without complaints. That one was a Seagate ST3000DM001, a model so notoriously bad that it's got its own Wikipedia entry: https://en.wikipedia.org/wiki/ST3000DM001
Other "better" HDDs were entirely unresponsive.

Statistics only really matter if you have many, many samples. Most people (even enthusiasts with a homelab) won't be buying hundreds of HDDs in their life.

[–] [email protected] 3 points 8 months ago (1 children)

Was about to post this; it works well for me.

In my case I'm storing the DB on my Google Drive for now, but Keepass2Android supports many different systems, including "generic" things like WebDAV, so really anything should work.

While Keepass2Android is integrated with the syncing and will always check for conflicts (i.e. check for the latest version before saving), the same isn't necessarily true for the desktop client. But since I rarely edit from both devices at the same time, anything that syncs to the desktop in a somewhat real-time fashion should work just fine.

And for the few (long ago) cases where updates were overwritten, the "previous version" feature of Google Drive was a godsend! (And KeePassX can simply merge the old, overwritten version into the current one to produce the correct result.)

[–] [email protected] 7 points 9 months ago (1 children)

I think the difference is at what level:

  • don't implement your own storage redundancy system at the kernel level with a small team in a closed-source fashion, because that's the kind of thing that needs many eyes, lots of experience and many millions of hours of real-world usage to fully debug and make sure it works.
  • do build your own system by combining pre-existing technologies that are built by experienced teams and tested/vetted by wide/popular usage.

I feel OP's critique has some truth to it. I personally would rather stay with raidz by ZFS, exactly because of its open nature (yes, they too have bugs, nothing is perfect).

[–] [email protected] 8 points 9 months ago (1 children)

Do you have any devices on your local network where the firmware hasn't been updated in the last 12 months? The answer to that is surprisingly frequently yes, because "smart device" companies are laughably bad about device security. My intercom runs some ancient Linux kernel, my frigging washing machine could be connected to WiFi, and the box that controls my roller shutters hasn't gotten an update since 2018.

Not everyone has those, and one could isolate them in VLANs and use other measures, but in this day and age "my local home network is 100% secure" is far from a safe assumption.

Heck, even your router might be vulnerable...

Adding HTTPS is just another layer in your defense in depth. How many layers you are willing to put up with is up to you, but it's definitely not overkill.
