rentar42

joined 1 year ago
[–] [email protected] 9 points 9 months ago* (last edited 9 months ago)

They are in fact the same image, as you can verify by comparing their digest:

$ docker pull ghcr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for ghcr.io/linuxserver/plex:latest
ghcr.io/linuxserver/plex:latest
$ docker pull lscr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for lscr.io/linuxserver/plex:latest
lscr.io/linuxserver/plex:latest
$

See how both images have the digest sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144. Since the digest uniquely identifies the exact content/image, that guarantees that those images are in fact byte-for-byte identical.
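The reason a matching digest is a byte-for-byte guarantee: the digest is just a SHA-256 hash computed over the image content, so the registry it was pulled from can't change it. A tiny Python sketch of the principle (the blobs here are made-up stand-ins, not real image data):

```python
import hashlib

# Hypothetical stand-in blobs: the registry host differs, but the digest
# is computed over the image content alone, never over the registry name.
blob_from_ghcr = b"exact same image bytes"
blob_from_lscr = b"exact same image bytes"

digest_a = "sha256:" + hashlib.sha256(blob_from_ghcr).hexdigest()
digest_b = "sha256:" + hashlib.sha256(blob_from_lscr).hexdigest()

print(digest_a == digest_b)  # True: same bytes, same digest
```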

[–] [email protected] 7 points 9 months ago (1 children)

As others have mentioned (and also explained in quite some detail) you're trying to bite off a lot at once. First, for Jellyfin locally you can ignore most of that.

And if you really want to learn the ins and outs of all that (and I can recommend it, it's useful), then I suggest you start with some simple web app. Something like note-taking, or maybe even something trivial like a whoami service, which basically just echoes back whatever information you sent it. That's super useful because you know it's unlikely to be broken, so you can focus on the networking/port-forwarding issues. And once you've got that working and have a rough feel for how this all works, you can move on to more complex setups that actually do something useful.
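Such a whoami-style service fits in a few lines. A minimal stdlib-only sketch (all names here are mine, it's not any particular whoami image) that echoes the request path and headers back:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class WhoamiHandler(BaseHTTPRequestHandler):
    """Echoes the request path and headers back to the caller."""

    def do_GET(self):
        body = f"path: {self.path}\n"
        for name, value in self.headers.items():
            body += f"{name}: {value}\n"
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass  # keep the console quiet

# Bind to an ephemeral port so nothing else needs to be free.
server = HTTPServer(("127.0.0.1", 0), WhoamiHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/hello").read().decode()
print(reply)
server.shutdown()
```

Once this answers locally, the exact same service is a clean test target for port forwarding: if the echoed headers change when you go through your router, you can see precisely what the network path did to the request.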

[–] [email protected] 4 points 9 months ago

Those are usually the prefixes for interfaces, which are not quite the same thing as networks. An interface is the surface that connects some device to a network. For example, if your router treats its WLAN and its wired network as a single network (i.e. each thing on WLAN can see everything on wired and vice versa), then a specific device might still have a wlan1 and an eth1 interface, each connecting to the respective physical network device, while both sit on the same network.

"One network" here really only means "something can successfully route between all the devices".
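You can see this distinction on your own machine: one host, possibly several attachment points. A quick sketch using the stdlib (`socket.if_nameindex()` is available on Unix and recent Windows; the names you get depend entirely on your hardware):

```python
import socket

# Each entry is an interface (an attachment point to a link),
# not a network by itself; several interfaces can sit on one network.
interfaces = socket.if_nameindex()
for index, name in interfaces:
    print(index, name)
```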

[–] [email protected] 5 points 9 months ago (6 children)

The EULA is just standard terms like don’t try to circumvent the license requirement, if you buy a license don’t share it with other people, some warranty and liability stuff, etc.

Yes, I know. I actually read it (which is rare) and it's mostly sensible stuff. The "no reverse engineering" clause just felt weird in something that claims to be "mostly open source".

In the end I find it slightly misleading to call this open-core when the app with just the non-commercial features can't be built fully from the published source.

They are not necessary for basic core functionality but it doesn’t work without it as the license requirement could be disabled easily then as I mentioned before.

I don't quite understand this argument. If I can build a development version, I can run any and all code in the repo (provided there is an existing xpipe installation), and with enough criminal intent I could presumably find a way to ship that too, so how exactly does this requirement prevent that?

In other words: if the only way to access the commercial features without a license is by doing something illegal then ... that's not really adding much burden, is it?

In the end I'm probably just one of those open-source proponents who doesn't like that, and that's fine. Not everyone needs to agree with everyone; there's a lot of space here where reasonable minds can disagree. I just think that claiming "the main application is open source" when it can't be built purely from the source is a bit misleading.

[–] [email protected] 4 points 9 months ago (8 children)

This looks really interesting.

I don't mind the commercialization at all and think it's actually a good sign for an open source project to have a monetization strategy to be able to stick around.

But why do I have to agree to a EULA for an Apache-licensed piece of software? I understand that might be necessary for the commercial features, but in that case could we get a separate installer for "this is all Apache-licensed, no EULA needed"?

Additionally the contribution file mentions that "some components are only included in the release version and not in this repository". What are these components? Are they necessary for the basic core functionality?

[–] [email protected] 23 points 10 months ago (12 children)

The issue is that, according to the spec, the two DNS servers provided by DHCP are equivalent. While most clients favor the first one as the default, that's not universally the case, and when and how a client switches to the secondary varies (and can effectively appear random). So you can't know for sure which DNS server a given client is using, especially after your DNS server was unreachable for a while for whatever reason. Personally I've "just" gotten a second Pi to run a redundant copy of PiHole, but only having a single DNS server is usually fine as well.

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago)

Hint: you don't need to route all your traffic through your VPN to make use of the PiHole ad blocking: just the DNS traffic. If your home internet is even moderately stable/good, this should barely affect your roaming internet experience, since DNS traffic is such a small part of all traffic.

Also, since I'm already mirroring the configuration of my PiHole instance to a secondary one, I'm considering putting a tertiary one on some forever-free cloud server instance and just using that when not at home (put into the same WireGuard VPN to prevent security nightmares). That way my roaming private DNS wouldn't even depend on my home internet.

[–] [email protected] 4 points 10 months ago (7 children)

Sidenote about the Pi filesystem self-clobbering: are you running off of an SD card? Running off an external SSD is way more reliable in my experience. Even a decent USB stick tends to be better than micro-SD in the long run, but even the cheapest external SSD blows both of them out of the water. Since I switched my Pis over to that, they've never had any disk-related issues.

[–] [email protected] 2 points 10 months ago

IMO set up a good incremental backup system with deduplication and then back up everything at least once a day as a baseline. Anything that's especially valuable can be backed up more frequently, but the price/effort of backing up at least once a day should become trivial if everything is set up correctly.

If you feel like hourly snapshots would be worth it, but too resource-intensive, then maybe replacing them with local snapshots of the file system (which are basically free, if your OS/filesystem supports them) might be reasonable. Those obviously don't protect against hardware failure, but help against accidental deletion.
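The reason the daily baseline becomes cheap is deduplication: unchanged chunks are stored once and only referenced again on later runs. A toy content-addressed store illustrating the idea (made-up fixed-size chunking, not how any real backup tool like borg or restic chunks data):

```python
import hashlib

# Toy content-addressed store: a second "daily" backup of mostly
# unchanged data only adds the chunks that actually changed.
store = {}

def backup(data, chunk_size=4):
    chunk_ids = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        cid = hashlib.sha256(chunk).hexdigest()
        store.setdefault(cid, chunk)  # dedup: identical chunks stored once
        chunk_ids.append(cid)
    return chunk_ids  # the "snapshot" is just a list of chunk references

day1 = backup(b"aaaabbbbcccc")
size_after_day1 = len(store)      # 3 unique chunks
day2 = backup(b"aaaabbbbDDDD")    # only the last chunk changed
size_after_day2 = len(store)      # 4 unique chunks: day 2 cost one chunk
print(size_after_day1, size_after_day2)
```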

[–] [email protected] 8 points 10 months ago* (last edited 10 months ago) (1 children)

What you describe is true for many file formats, but for most lossy compression systems the "standard" only strictly specifies how to decode the data; any encoder whose output successfully decodes that way is fine.

The standard defines a collection of "tools" that encoders can use, and how exactly to use, combine and tweak those tools is up to each encoder.

And over time, new/better combinations of these tools are found for specific scenarios. That's how different encoders for the same codec can produce very different output.

As a simple example, almost all video codecs by default describe each frame relative to the previous one (i.e. they describe which parts moved and what new content appeared). There is of course also the option to send a completely new frame, which usually takes up more space. But when one scene cuts to another, sending a new frame can be much better. A "bad" encoder might lack scene-cut detection and still try to "explain the difference" relative to the previous scene, which can easily take up more space than just sending the entire new frame.
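A toy experiment makes the trade-off visible: compressing the per-byte difference of two similar frames beats compressing the full frame, while at a scene cut the difference is about as incompressible as the new frame itself. (Per-byte subtraction plus zlib is a crude stand-in for real motion compensation, not how any actual codec works.)

```python
import random
import zlib

random.seed(42)

def rand_frame(n=4096):
    """A frame of noise, standing in for arbitrary picture content."""
    return bytes(random.randrange(256) for _ in range(n))

def delta(prev, cur):
    # Per-byte difference: mostly zeros when the frames are similar.
    return bytes((c - p) % 256 for p, c in zip(prev, cur))

def compressed_size(data):
    return len(zlib.compress(data))

prev_frame = rand_frame()
moved = bytearray(prev_frame)
for i in range(100):                  # small change: a tiny region "moved"
    moved[i] = (moved[i] + 1) % 256
next_frame = bytes(moved)
cut_frame = rand_frame()              # scene cut: unrelated content

delta_similar = compressed_size(delta(prev_frame, next_frame))
full_similar = compressed_size(next_frame)
delta_cut = compressed_size(delta(prev_frame, cut_frame))
full_cut = compressed_size(cut_frame)

print("similar frames:", delta_similar, "vs full", full_similar)
print("scene cut:     ", delta_cut, "vs full", full_cut)
```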

[–] [email protected] 6 points 10 months ago* (last edited 10 months ago)

Note that repeatedly spinning hard disks up and down carries some reliability drawback. Perhaps unintuitively, HDDs that spin constantly can live much longer than those that spend 90% of their time spun down.

This might not be relevant if you use only SSDs, and might never affect you, but it should be mentioned.

[–] [email protected] 3 points 11 months ago (1 children)

You know that you too are writing in a script, right?
