rentar42

joined 1 year ago
[–] [email protected] 18 points 11 months ago (1 children)

This feels like an XY problem. To give you a useful answer, we'd need to know what exactly you're trying to achieve: what is your goal with the VPN, and what is your goal in using the client IP?

[–] [email protected] 1 points 11 months ago (1 children)

Note that just because everything is digital doesn't mean something like that is unnecessary: if you depend on your service provider to keep all of your records, then you will be out of luck once they ... stop liking you, go out of business, have a technical malfunction, decide they no longer want to keep any records older than X years, ...

So even in an all-digital world I'd still keep all the PDF artifacts in something like that.

And I also second the suggestion of paperless-ngx (I haven't been using it for very long yet, but it's working great so far).

[–] [email protected] 2 points 11 months ago

Ask yourself what your "job" in the homelab should be: do you want to manage what apps are available, or do you want to be a DB admin? Because if you are sharing DB containers between multiple applications, you've basically signed up for closely checking the release notes of every release of each involved app for changes like this.

Treating "immich+postgres+redis+..." as a single unit that you deploy and upgrade together makes everything simpler at the (probably small) cost of requiring some more resources. But even on a 4GB-ram RPi that's unlikely to become the primary issue soon.

[–] [email protected] 1 points 11 months ago

There are many different approaches, with different tradeoffs. For example, on my home server I've set it up so that I have to enter the passphrase on every boot, which isn't often. But I've also set it up to run an SSH server so I can enter it remotely.

On my work laptop I simply have to enter it on each boot, but it mostly just goes into suspend.

One could also keep the key on a USB stick (or better, use a YubiKey) and unplug it whenever that's reasonable.

[–] [email protected] 23 points 11 months ago

Just FYI: the often-cited NIST standard (SP 800-88, "Guidelines for Media Sanitization") no longer recommends/requires more than a single pass of a fixed pattern to clear magnetic media. See https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-88r1.pdf for the full text. In Appendix A it states:

Overwrite media by using organizationally approved software and perform verification on the overwritten data. The Clear pattern should be at least a single write pass with a fixed data value, such as all zeros. Multiple write passes or more complex values may optionally be used.

This is the standard that pretty much birthed the "multiple passes" idea, but modern HDD technology has made that essentially unnecessary (unless you are combating nation-state-sponsored attackers, in which case you should be physically destroying the drives anyway, preferably using some high-heat method).
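In practice that Clear operation is just a single fixed-value pass, something like this (destructive, obviously; /dev/sdX is a placeholder, so triple-check the device name first):

```
# one pass of zeros over the whole disk, flushed to the device
dd if=/dev/zero of=/dev/sdX bs=1M status=progress conv=fsync

# or with shred: no random passes (-n 0), just a final pass of zeros (-z)
shred -v -n 0 -z /dev/sdX
```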

[–] [email protected] 11 points 11 months ago (4 children)

It's not much use now, but to avoid the entire issue next time, just use whole-disk encryption from the start. Then the disk is effectively pre-wiped as soon as you "lose" the encryption key: everything on it is just ciphertext, so deleting the partition table will present the disk as empty and there's no realistic chance of recovering any prior content.
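Roughly, with LUKS (device name is a placeholder; the first block is the one-time setup, the last block is what "wiping" looks like when you retire the disk):

```
# one-time setup: encrypt the disk before putting any data on it
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX cryptdata
mkfs.ext4 /dev/mapper/cryptdata

# retiring the disk: destroy all LUKS key slots, then drop the signatures;
# the leftover ciphertext is useless without the key
cryptsetup luksErase /dev/sdX
wipefs -a /dev/sdX
```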

[–] [email protected] 6 points 11 months ago (1 children)

That saying also means something else (and IMO more importantly): RAID doesn't protect against accidental or malicious deletion/modification. It only protects against data loss due to hardware faults.

If you delete stuff or overwrite it then RAID will dutifully duplicate/mirror/parity-check that action, but doesn't let you go back in time.

That's also the reason why just syncing the data automatically to another target isn't the same as a full backup.

[–] [email protected] 10 points 11 months ago* (last edited 11 months ago) (1 children)

That being said: backing up to a single, central, local location and then syncing those backups to some offsite location can actually be very efficient (and avoids having to spread the credentials for whatever off-site storage you use to multiple devices).
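As a sketch (restic and rclone are just example tools; hostnames, paths and remote names are placeholders): each machine only needs credentials for the local target, and only that one server holds the off-site credentials.

```
# on each client: back up to the central server over SSH
# (restic asks for the repository password, or reads $RESTIC_PASSWORD)
restic -r sftp:backupserver:/srv/backups/$(hostname) backup /home /etc

# on the backup server: replicate the whole backup store to the off-site location
rclone sync /srv/backups offsite-remote:backups
```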

[–] [email protected] 2 points 11 months ago (1 children)

RAID 5 with 3 drives survives one dying disk. RAID 1 (mirroring) with 2 disks survives one dying disk. If either setup loses two disks, all the data is gone.

When you run 3 disks, the odds of two of them failing are higher than when you run only 2 disks (roughly: if each disk independently fails with probability p in some window, losing two out of three happens about three times as often as losing two out of two, ~3p² vs. p²).

So 3 disks are not significantly safer and might even be worse.

That being said: both setups are fine for home use, because you've set up real backups anyway, right?

[–] [email protected] 7 points 11 months ago (1 children)

I'm using encrypted ZFS as the root partition on my server and I've (mostly) followed the instructions in point #15 from here: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html

This starts dropbear as an SSH server that only has a single task: when someone logs in to it they get asked for the decryption key of the root partition.

I suspect that this could be adapted to whatever encryption mechanism you use.

I didn't follow it exactly, because I didn't want the host's "real" SSH host keys to be accessible unencrypted in the initrd, so the "locked" host presents a different SSH host key than the fully booted system, which I actually prefer.
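For comparison, on a plain LUKS-encrypted Debian root the same idea looks roughly like this (a sketch, not the ZFS-specific steps from that guide; the authorized_keys path moved between Debian releases):

```
apt install dropbear-initramfs

# allow your key to log in to the early-boot SSH server
# (on older releases the path is /etc/dropbear-initramfs/authorized_keys)
cp ~/.ssh/id_ed25519.pub /etc/dropbear/initramfs/authorized_keys
update-initramfs -u

# after the next reboot, unlock the root filesystem remotely:
ssh root@server
# ...and at the prompt:
cryptroot-unlock
```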

[–] [email protected] 4 points 11 months ago (1 children)

You don't need a dedicated git server if you just want a simple place to store repositories. Simply place a bare git repository on your server, use ssh://yourserver/path/to/repo as the remote URL, and you can push/pull.
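Roughly (hostname and paths are placeholders):

```
# on the server: create a bare repository to push to
ssh yourserver 'git init --bare /srv/git/myproject.git'

# on your machine: point an existing repo at it and push
git remote add origin ssh://yourserver/srv/git/myproject.git
git push -u origin main
```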

If you want more than that (e.g. a nice web UI, user management, issue tracking, ...) then Gitea is a common solution, but you can even run GitLab itself locally.

[–] [email protected] 28 points 11 months ago (4 children)

"Use vim in SSH" is not a great answer to asking for a convenient way to edit a single file, because it requires understanding multiple somewhat-complex pieces of technology that OP might not be familiar with and have a reasonably steep learning curve.

But I'd still like to explain why it pops up so much. And the short version is very simple: versatility.

Once you've learned how to SSH into your server you can do a lot more than just edit a file. You can download files with curl directly to the server, move files around, copy them, install new software, set up an entire new Docker container, update the system, reboot it, and much more.

So while there's definitely easier-to-use solutions to the one singular task of editing a specific file on the server, the "learn to SSH and use a shell" approach opens up a lot more options in the future.

So if in 5 weeks you need to reboot the machine but your web-based file-editing tool doesn't support that, you'll have to search for a new solution. But if you've learned how to use the shell, a simple "how do I reboot Linux from the shell" search is all you need.

Also: while many people like using vim, for a beginner in text-based remote management I'd recommend something simpler like nano.
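For the concrete task at hand, the whole workflow is just (hostname and path are placeholders):

```
ssh user@yourserver          # log in to the server
nano /path/to/some/config    # edit the file: Ctrl+O saves, Ctrl+X quits
sudo reboot                  # ...and the hypothetical future task from above
```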
