this post was submitted on 10 Jan 2025
82 points (95.6% liked)

Selfhosted

tldr: I'd like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services over the internet, but I'm not sure what the best/safest way to do it is. Asking my partner to use Tailscale or WireGuard is asking too much, unfortunately. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via Tailscale. Some of these services would benefit from being accessible on the internet (e.g. Immich sharing via a link, switching from Plex to Jellyfin without requiring my family to learn how to use a VPN, Home Assistant voice stuff, etc.), but I'm not sure what the best approach is. Hosting services on the internet carries risk, and I'd like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains, but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box instead?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What's the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

(page 2) 49 comments
[–] [email protected] 4 points 4 days ago (2 children)

The biggest reason to use a VPN is that some ISPs may take issue with you running a web server on a residential connection when they see incoming HTTP requests to your IP. If you don't want to require a VPN, then Cloudflare tunnels are perfect for this, and they also solve the need for dynamic DNS if you want to use a static domain, because your domain points to the Cloudflare edge servers and they route traffic to you wherever your tunnel endpoint is running.
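
(For anyone wondering what the tunnel side actually looks like: a minimal cloudflared ingress config is roughly the sketch below. The tunnel ID, hostnames, and ports are placeholders, not from this thread.)

    # /etc/cloudflared/config.yml -- minimal sketch; IDs, hostnames, and ports are placeholders
    tunnel: example-tunnel-id
    credentials-file: /etc/cloudflared/example-tunnel-id.json
    ingress:
      - hostname: immich.example.com
        service: http://192.168.1.10:2283   # forward matching requests to the LAN service
      - service: http_status:404            # catch-all for anything that doesn't match

The cloudflared daemon makes an outbound connection to Cloudflare's edge, so no ports have to be forwarded on the router.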

Past that, Traefik is a great reverse proxy that can manage getting Let's Encrypt SSL certificates for you, even with wildcard domains, and it would still work fine with dynamic DNS.

[–] [email protected] 2 points 4 days ago (1 children)

Do you mind giving a high-level overview of what a Cloudflare tunnel is doing? Like, what's connected to what and how does the data flow? I've seen Cloudflare mentioned a few other times in the comments here. I know Cloudflare offers DNS services via their 1.1.1.1 and 1.0.0.1 IPs, and I also know they somehow offer DDoS protection (although I'm not sure how exactly; caching?). However, that's the limit of my knowledge of Cloudflare.

[–] [email protected] 5 points 4 days ago (1 children)

I use Nginx Proxy Manager and Let's Encrypt with a Porkbun domain; it was very easy to set up for me. Never tried Caddy/Traefik/etc. though. Geo-blocking happens on my OPNsense with the built-in tools.

[–] [email protected] 0 points 4 days ago (1 children)

Do you have instructions on how you set that up?

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago) (1 children)

At a high level, you forward ports 80 and 443 from your router to NPM. In NPM you set up each proxy host by IP address and port, and you can also set up automatic SSL certs via Let's Encrypt when you create the proxy host. I also run a DDNS auto-update that tells Porkbun if my IP changes. I'd be happy to get into some more specifics if there's a particular spot you're stuck. This is all assuming you have a public IPv4 and aren't behind CGNAT. If you have CGNAT you're not totally fucked, but it makes things more complicated. If it's OPNsense-related struggles, that shit is mysterious to me; I've only been running it a few weeks and it's not fully configured. Still learning.
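
(Not from the comment above, but for reference: a bare-bones NPM compose service usually looks something like this sketch; the image, ports, and volume paths are the common defaults, adjust to taste.)

    services:
        npm:
            image: jc21/nginx-proxy-manager:latest
            restart: unless-stopped
            ports:
                - 80:80     # HTTP, forwarded from the router
                - 443:443   # HTTPS, forwarded from the router
                - 81:81     # admin web UI -- keep this LAN-only, don't forward it
            volumes:
                - ./data:/data
                - ./letsencrypt:/etc/letsencrypt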

[–] [email protected] 1 points 4 days ago* (last edited 4 days ago) (2 children)

Why am I forwarding all http and https traffic from WAN to a single system on my LAN? Wouldn't that break my DNS?

[–] [email protected] 2 points 3 days ago

You would be forwarding ingress traffic (traffic not originating from your internal network) to ports 443/80. This doesn't affect egress requests (requests from users inside your network for external sites), so it wouldn't break your internal DNS resolution. All traffic heading to your router from outside origins gets pushed to your reverse proxy, where you can then route it however you please to whatever machine/port your apps live on.

[–] [email protected] 1 points 4 days ago* (last edited 4 days ago)

The reverse proxy is the single system because it's what tells the incoming traffic where to go. It also doesn't really do anything unless the incoming traffic is requesting one of the domains you set up, and it doesn't affect your internal DNS. You can still point the public addresses at your internal server through DNS, though.

[–] [email protected] 4 points 4 days ago* (last edited 4 days ago) (1 children)

On my home network I have Nginx Proxy Manager running Let's Encrypt with my domain for HTTPS, currently only for Vaultwarden (I'm testing it for a bit before rolling it out or migrating wholly over to HTTPS). My domain is a ######.xyz that's cheap.

For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx that proxies them over Tailscale] that sits on their home network; that way they need "something they have" [the relay] and "something they know" [login credentials] to get at my stuff. I won't implement biometrics for "something they are". This is post hoc justification though, and nonsense to boot. I don't want to expose a port, a VPS has low WAF, and I'm not installing Tailscale on all of their devices, so a relay is an unhappy compromise.

For bonus points I run Pi-hole to pretty up the domain names to service.swirl, and I run a Homarr instance so no one needs to remember anything except home.swirl, but if they do remember immich.swirl that works too.
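
(A rough sketch of how those pretty names work, assuming Pi-hole's "Local DNS records" feature: they end up as hosts-format entries, usually in /etc/pihole/custom.list depending on the version. The IPs and names below are made up.)

    # /etc/pihole/custom.list -- hosts format: <LAN IP of the reverse proxy> <pretty name>
    192.168.1.10  home.swirl
    192.168.1.10  immich.swirl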

If there are many ways to skin a cat, I believe I chose to use a spoon; don't be like me. Updating each Dockge instance is a couple of minutes, and updating DietPi is a few minutes more, which, individually, is not a lot of my weekly/monthly maintenance respectively. But on aggregate... I have checklists. One day I'll write a script that will SSH into a machine > update/upgrade the OS > docker compose pull/rebuild/purge > move on to the next relay... That'll be my impetus to learn how to write a script.

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago) (1 children)

That’ll be my impetus to learn how to write a script.

This part caught my eye. You were able to do all that other stuff without ever attempting to write a script? That's surprising and awesome. Assuming you are running everything on a Linux server, I feel like a bash script run via a cron job would be your best bet; no need to SSH into the server, just let it do it on its own. I haven't tested any of this, but I do have scripts I wrote that do automatic ZFS backups and scrubs. The order should go something like:

open the terminal on the server and type

mkdir scripts

cd scripts

nano docker-updates.sh

type something along the lines of this (I'm still learning docker so adjust the commands to your needs)

#!/bin/bash

# change to the directory that contains your docker-compose.yml (cd needs a directory, not the file itself)
cd /path/to/compose-directory || exit 1
# pull newer images and recreate any containers whose image changed
docker compose pull && docker compose up -d
# clean up old, now-unused images
docker image prune -f

save the file and then type sudo chmod +x ./docker-updates.sh to make it executable

and finally, set up a cron job to run the script at specific intervals. Type

crontab -e

or

sudo crontab -e (this is if you want to run the script as root but ideally, you just add your user to the docker group so this shouldn't be needed)

and at the bottom of the file type this and save, that's it:

# runs script at 1am on the first of every month
0 1 1 * * /path/to/scripts/docker-updates.sh

this website will help you choose a different interval

For OS updates you basically do the same thing, except the script would look something like the following. (I forget if you need to type "sudo" or not; it's running as root so I don't think you need it, but maybe try it with sudo in front of both "apt"s if it's not working. Also use whatever package manager you have if you aren't using apt.)

while in the scripts folder you created earlier

nano os-updates.sh

#!/bin/bash

# refresh the package lists, then apply all upgrades non-interactively
apt update && apt upgrade -y
# reboot so kernel and library updates take effect
reboot

save and don't forget to make it executable

then use

sudo crontab -e (because you'll need root privileges to update; this will run the script as root without requiring you to input your password)

# runs script at 12am on the first of every month
0 0 1 * * /path/to/scripts/os-updates.sh
[–] [email protected] 2 points 4 days ago* (last edited 4 days ago) (2 children)

I did think about cron but, long ago, I heard it wasn't best practice to update through cron because the lack of logs makes it difficult to see where things went wrong, when they do.

I've got automatic upgrades running on stuff so it's mostly fine. Dockge is running purely to give me a way to upgrade docker images without having to SSH in. It's just the monthly routine of "apt update && apt upgrade -y" ×5 that sucks.

Thank you for the advice though. I'll probably set cron to update the images with the script as you suggest. I have a "maintenance" Homarr page as a budget Uptime Kuma, so I can quickly look there to make sure everything is pinging at least. I made the page so I can quickly get to everyone's Dockge, Pi-hole and nginx, but the pings were a happy accident.
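
(For what it's worth, the usual workaround for cron's lack of logs is to redirect the script's output in the crontab entry itself; a sketch, with the paths as placeholders:)

    # append stdout and stderr of the monthly run to a log file you can check later
    0 1 1 * * /path/to/scripts/docker-updates.sh >> /path/to/scripts/docker-updates.log 2>&1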

[–] [email protected] 4 points 4 days ago (1 children)
  1. I got started with a guide from these guys back in 2020. I still use Traefik as my reverse proxy and Authelia for authentication, and it has worked great all this time. As someone else said, it all runs on a single box using containers for separation, and it's super easy this way. I should probably look into a secondary server as a live backup, but that's a lot of work / expense. I have a Cloudflare dynamic DNS container running for that.
  2. I would definitely advocate for owning your own domain, for the added use case of owning your own email addresses. I can now switch email providers and don’t have to worry about losing anything. This would also lean towards a more memorable domain, or at least a second domain that is memorable. Stay away from the country TLDs or “cute” generic TLDs and stay with a tried and true .com or .net (which may take some searching).
  3. I don’t bother with this, I just run my server behind Cloudflare, and let them protect my server. Some might disagree, but it’s easy for me and I like that.
  4. Containers, containers, containers! Probably Docker since it’s easy, but Podman if you really want to get fancy / extra secure. Also, make sure you have a git repo for your compose files, and a solid backup strategy from the start (so much easier than going back and doing it later). I use Backblaze for my backups and it’s $2/month for some peace of mind.
  5. Do it!!!
[–] [email protected] 3 points 4 days ago (2 children)

Tailscale is completely transparent on any devices I've used it on. Install, set up, and never look at it again because unless it gets turned off, it's always on.

[–] [email protected] 2 points 4 days ago (1 children)

I've run into a weird issue where, on my phone, Tailscale will disconnect and refuse to reconnect for a seemingly random amount of time, usually less than an hour. It doesn't happen often, but it is often enough that I've started to notice. I'm not sure if it's a network issue or an app issue, but during that time I can't connect to my services. All that to say, my tolerance for that is higher than my partner's; the first time something didn't work, they would stop using it lol

[–] [email protected] 2 points 4 days ago

So I have it running on about 20 phones for customers of mine that use Blue Iris with it. But these are all Apple devices, I'm the only one with Android. I've never had a complaint except one person that couldn't get on at all, and we found that for some reason the Blue Iris app was blacklisted in the network settings from using the VPN. But that's the closest I've seen to your problem.

I wonder if setting up a ping every 15 seconds from the device to the server would keep the tunnel active and prevent the disconnect. I don't think Tailscale has a keepalive function like a plain WireGuard connection. If that's too much of a pain, you might want to just set up WireGuard yourself, since you can set a PersistentKeepalive value and the tunnel won't go idle. Tailscale is probably trying to reduce their overhead, so they don't include a keepalive.
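
(For reference, that keepalive is the PersistentKeepalive setting in the client's WireGuard config; a sketch of the relevant [Peer] section, with the key, endpoint, and addresses as placeholders:)

    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.0.0.0/24
    PersistentKeepalive = 25   # send a keepalive every 25 seconds so the NAT mapping stays open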

[–] [email protected] 2 points 4 days ago (1 children)

NixOS with nginx services does all the proxying and SSL stuff; fail2ban is there as well.

[–] [email protected] 1 points 3 days ago (7 children)

I know I should learn NixOS, I even tried for a few hours one evening but god damn, the barrier to entry is just a little too high for me at the moment 🫤

[–] [email protected] 3 points 4 days ago

I use this: https://github.com/ZoeyVid/NPMplus. I use UniFi for geo-blocking.

[–] [email protected] 2 points 4 days ago

I used to do a reverse proxy setup with Caddy, but now I self-host a WireGuard VPN. It has access to Nextcloud on the same machine, and Home Assistant and Kodi on another. On our phones, WireGuard only handles traffic for certain apps; the rest of the network traffic is normal. A nice simple setup.

[–] [email protected] 2 points 4 days ago (2 children)
[–] [email protected] 3 points 4 days ago (1 children)

I presume you're referring to Cloudflare tunnel?

[–] [email protected] 2 points 4 days ago

Yep, cloudflare tunnel / Zero trust.

Dead easy to set up.

[–] [email protected] 2 points 4 days ago

I use Nginx Proxy Manager in its own docker container on my Unraid server. It was pretty simple to set up, all things considered. I would call myself better with hardware than software, but not a complete newb, and I got it running with minimal headache.

[–] [email protected] 2 points 4 days ago (1 children)

I've tried 3 times so far with Python/Gradio/Oobabooga and never managed to get certs to work, or to find a complete visual reference guide that demonstrates a working example like what I am looking for on a home network. (Only really commenting to subscribe to watch this post develop, and solicit advice :)

[–] [email protected] 3 points 4 days ago* (last edited 1 day ago) (1 children)

I've played around with reverse proxies and SSL certs, and the easiest method I've found so far was Docker. Just haven't put anything in production yet. If you don't know how to use Docker, learn; it's so worth it.

Here is the tutorial I used and the note I left for myself. You'll need a domain to play around with. Once you figure out how to get NGINX and certbot set up, replacing the helloworld container with a different one is relatively straightforward.

DO NOT FORGET: you must give certbot read/write permissions in the docker-compose.yml file, which isn't shown in this tutorial.
-----EXAMPLE, NOT PRODUCTION CODE----

    nginx:
        container_name: nginx
        restart: unless-stopped
        image: nginx
        depends_on:
            - helloworld
        ports:
            - 80:80
            - 443:443
        volumes:
            - ./nginx/nginx.conf:/etc/nginx/nginx.conf
            - ./certbot/conf:/etc/letsencrypt:ro
            - ./certbot/www:/var/www/certbot:ro

    certbot:
        image: certbot/certbot
        container_name: certbot
        volumes:
            - ./certbot/conf:/etc/letsencrypt:rw
            - ./certbot/www:/var/www/certbot:rw
        command: certonly --webroot -w /var/www/certbot --keep-until-expiring --email *email* -d *domain1* -d *domain2* --agree-tos
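
(Not part of the original note, but for context: the nginx.conf mounted above needs a server block that serves the ACME challenge files certbot writes into /var/www/certbot, roughly like the sketch below; example.com is a placeholder.)

    server {
        listen 80;
        server_name example.com;

        # serve the HTTP-01 challenge files written by the certbot container
        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        # send everything else to HTTPS once the cert exists
        location / {
            return 301 https://$host$request_uri;
        }
    }

Once the first certificate has been issued, you add a separate listen 443 ssl server block that points at the cert files under /etc/letsencrypt/live/<domain>/.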
[–] [email protected] 2 points 4 days ago (1 children)

I'd add that Traefik works even better with Docker, because you tag your other containers that have web ports, and Traefik picks that up from Docker and terminates the SSL connection for them. You don't even have to worry about setting up SSL on every individual service; Traefik will take care of that, even for services that don't implement SSL themselves.

[–] [email protected] 1 points 4 days ago (1 children)

You don’t even have to worry about setting up SSL on every individual service

I probably need to look into it more, but since Traefik is the reverse proxy, doesn't it just get one SSL cert for a domain that all the other services use? I think that's how my current nginx proxy is set up: one cert configured to work with the main domain and a couple of subdomains. If I want to add a subdomain, if I remember correctly, I just add it to the config, restart the containers, and certbot gets a new cert for all the domains.

[–] [email protected] 2 points 3 days ago

Traefik basically has certbot built in so when you configure a new hostname on a service it automatically handles requesting and refreshing the cert for you. It can either request individual certificates for each hostname or a wildcard certificate (*.yourdomain.com) that covers all subdomains.

The neat trick is that in Docker you configure Traefik by adding labels to the other containers you want to proxy. When you start up a container, Traefik automatically reads the config from the labels, does any necessary setup, and voilà, it's ready to go!
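
(A minimal sketch of what those labels look like on a proxied container; the hostname and the certresolver name are placeholders and have to match whatever you configured in Traefik itself.)

    services:
        whoami:
            image: traefik/whoami
            labels:
                - "traefik.enable=true"
                - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
                - "traefik.http.routers.whoami.entrypoints=websecure"
                - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"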
