wireless_purposely832

joined 2 months ago
[–] [email protected] 1 points 1 month ago

Open source projects still need a maintainer/owner. For example, Facebook "controls" React, Microsoft "controls" Visual Studio Code, and AdguardTeam "controls" AdGuardHome. There are several reasons to not trust a maintainer (eg: license changes, prioritizing or implementing undesired functionality and anti-features, converting to "open core", abandoning the project, selling out to less trustworthy entities, etc.).

Per Adguard's website, the legal entity behind the various AdGuard products is ADGUARD SOFTWARE LIMITED. A quick search on that company shows that there are 3 founders and they seem to have some ties to Russia. There is more information online about this, but whether this means they can be trusted or not is up to each potential user of one of the AdGuard products.

[–] [email protected] 2 points 1 month ago

I feel silly for not realizing that the SSH config would be used by Git!

I thought that if Forgejo's SSH service listened on a non-standard port, you would have to include the port in the command itself, similar to the example below (following your example). I guess I assumed Git did not directly use the client's SSH service.

git pull [email protected]:1234:user/project.git
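
In case it helps anyone else who made the same assumption: an entry in the client's ~/.ssh/config is picked up by Git automatically, so the non-standard port never needs to appear in the command. A minimal sketch, assuming a hypothetical host git.example.com with Forgejo's SSH service on port 1234:

# ~/.ssh/config on the client (hypothetical host, port, and key)
Host git.example.com
    Port 1234
    User git
    IdentityFile ~/.ssh/id_ed25519

With that in place, a plain "git clone git@git.example.com:user/project.git" works, because Git shells out to the ssh client and the ssh client reads the config above.
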
[–] [email protected] 1 points 1 month ago

There are plenty of valid reasons to want to use a reverse proxy for SSH:

  • Maybe there is a Forgejo instance and a Gitea instance running on the same server.
  • Maybe there is a Prod Forgejo instance and a Dev Forgejo instance running on the same server.
  • Maybe both Forgejo and an SFTP server are running on the same server.
  • Maybe Forgejo is running in a cluster like Docker Swarm or Kubernetes.
  • Maybe there is a desire to have Caddy act as a bastion host, either because running a true SSH bastion host is not possible or to reduce the maintenance of managing yet another service/server in addition to Caddy.

Regardless of the reason, your last point is valid and the real issue here. I do not think it is possible for Caddy to reverse proxy SSH traffic - at least not without additional software (either on the client, the server, or both) or some overly complicated (and likely less secure) setup. This would be possible if raw TCP traffic included SNI information, but unfortunately it does not.
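
For completeness, the "additional software" route could be something like the caddy-l4 plugin (a separate module that has to be compiled into Caddy, not part of stock Caddy). Even then there is nothing in raw SSH traffic to match on, so each upstream still needs its own listener port or IP. A rough sketch, assuming a hypothetical Forgejo host at 10.0.0.5 and a dedicated listener on port 2222:

{
  "apps": {
    "layer4": {
      "servers": {
        "forgejo_ssh": {
          "listen": [":2222"],
          "routes": [
            {
              "handle": [
                { "handler": "proxy", "upstreams": [ { "dial": ["10.0.0.5:22"] } ] }
              ]
            }
          ]
        }
      }
    }
  }
}

This is just TCP pass-through on a dedicated port, so it does not solve the "multiple SSH services on port 22" problem - it only changes where the traffic gets forwarded from.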

[–] [email protected] 2 points 1 month ago (2 children)

people often seem to have a misinformed idea that the first item on your dns server list would be preferred and that is very much not the case

I did not know that. TIL that I am people!

Do you know if it's always this way? For example, you mentioned this is how it works for DNS on a laptop, but would it behave differently if DNS is configured at the network firewall/router? I tried searching for more information confirming this, but did not find anything indicating how accurate it is.

[–] [email protected] 3 points 1 month ago (5 children)

Depending on the network's setup, having Pihole fail or become unavailable could leave the network completely broken until Pihole becomes available again. Configuring the network to have at least one backup DNS server is therefore extremely important.

I also recommend having redundant and/or highly available Pihole instances running on different hardware if possible. It may also be a good idea to have an additional external DNS server (eg: 1.1.1.1, 8.8.8.8, 9.9.9.9, etc.) configured as a last resort backup in the event that all the Pihole instances are unavailable (or misconfigured).
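
As a concrete sketch (the addresses are made up), a router running dnsmasq could hand out both Pihole instances plus a public resolver via DHCP. Just keep in mind that clients do not reliably fail over in order, so some queries may hit the public resolver even while the Piholes are healthy:

# /etc/dnsmasq.conf on the router - DHCP option 6 (DNS servers) handed to clients
# 192.168.1.10 and 192.168.1.11 are two Pihole instances on separate hardware;
# 9.9.9.9 is a public resolver used only as a last-resort backup
dhcp-option=option:dns-server,192.168.1.10,192.168.1.11,9.9.9.9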

[–] [email protected] 2 points 1 month ago (2 children)

The steps below are high level, but should provide an outline of how to accomplish what you're asking for without having to associate your IP address with any domains or publicly expose your reverse proxy and the services behind it. I assume that since you're running Proxmox you already have all the necessary hardware and are capable of completing each of the steps. There are more thorough guides available online for most of the steps if you get stuck on any of them, and a rough config sketch for the certificate piece follows the list.

  1. Purchase a domain name from a domain name registrar
  2. Configure the domain to use a DNS provider (eg: Cloudflare, Duck DNS, GoDaddy, Hetzner, DigitalOcean, etc.) that supports DNS challenges for wildcard certificates
  3. Use NginxProxyManager, Traefik, or some other reverse proxy that supports automatic certificate renewals and wildcard certificates
  4. Configure both the DNS provider and the reverse proxy to complete the DNS challenge for a wildcard certificate
  5. Setup a local DNS server (eg: PiHole, AdGuardHome, Blocky, etc.) and configure your firewall/router to use the DNS server as your DNS resolver
  6. Configure your reverse proxy to serve your services via domains with a subdomain (eg: service1.domain.com, service2.domain.com, etc.) and turn on http (port 80) to https (port 443) redirects as necessary
  7. Configure your DNS server to point your services' subdomains to the IP address of your reverse proxy
  8. Access your services from anywhere on your network using the domain names and HTTPS where applicable
  9. (Optional) Setup a VPN (eg: OpenVPN, WireGuard, Tailscale, Netbird, etc.) within your network and connect your devices to your VPN whenever you are away from your network so you can still securely access your services remotely without directly exposing any of the services to the internet
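
Here is the rough sketch mentioned above for steps 3, 4, and 6, using Traefik with Cloudflare as the (assumed) DNS provider. The domain, email, and service names are placeholders, so adjust everything to your own setup:

# docker-compose.yml (sketch) - Traefik with a wildcard certificate via DNS challenge.
# Assumptions: Cloudflare is the DNS provider, "domain.com" is the purchased domain,
# and the API token is supplied through the CF_DNS_API_TOKEN environment variable.
services:
  traefik:
    image: traefik:v3.1
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # step 6: redirect http (port 80) to https (port 443)
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --certificatesresolvers.le.acme.dnschallenge=true
      - --certificatesresolvers.le.acme.dnschallenge.provider=cloudflare
      - --certificatesresolvers.le.acme.email=admin@domain.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    environment:
      - CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  service1:
    image: nginx:alpine   # stand-in for any internal service
    labels:
      - traefik.enable=true
      - traefik.http.routers.service1.rule=Host(`service1.domain.com`)
      - traefik.http.routers.service1.entrypoints=websecure
      - traefik.http.routers.service1.tls.certresolver=le
      # request a wildcard certificate covering every subdomain
      - traefik.http.routers.service1.tls.domains[0].main=domain.com
      - traefik.http.routers.service1.tls.domains[0].sans=*.domain.com

Steps 5 and 7 then come down to local DNS records (a wildcard record where your DNS server supports it, or one record per subdomain) pointing at the machine running the reverse proxy.
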
[–] [email protected] 5 points 2 months ago

This would only work if there is no other traffic on the port being used (eg: port 22). If both the host's SSH service and Forgejo's SSH service expect traffic on port 22, then this would not work since server name indication (SNI) is not provided with SSH traffic, so Caddy would not be able to identify the appropriate destination for traffic belonging to multiple SSH services.

[–] [email protected] 1 points 2 months ago (1 children)

Are you able to provide some details on how you are doing this? I don't think you can do much with reverse proxies and SSH beyond routing all traffic on port 22 (or the configured SSH port) to whichever port SSH is listening on. In other words, the reverse proxy cannot route SSH traffic for the host on port 22 to the host, route SSH traffic for Forgejo on port 22 to Forgejo's SSH process, and route SFTP traffic on port 22 to the SFTP process - at least not by domain name the way an HTTP/HTTPS reverse proxy works.

Instead, this would need to be done by IP address, where the host's SSH process listens on 192.168.1.2, the Forgejo SSH process listens on 192.168.1.3, and the SFTP process listens on 192.168.1.4. Otherwise, each of those services would need to use a different port.
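
A rough sketch of the per-IP approach, assuming the server has all three addresses assigned to it and the Forgejo and SFTP services run in containers (the image names are placeholders):

# /etc/ssh/sshd_config on the host - bind the host's own SSH service to a single address
ListenAddress 192.168.1.2
Port 22

# Publish the other services' port 22 on the remaining addresses, e.g. with Docker:
#   docker run -d -p 192.168.1.3:22:22 <forgejo image>
#   docker run -d -p 192.168.1.4:22:22 <sftp image>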

[–] [email protected] 1 points 2 months ago

I believe the reverse proxy settings in your post are just configured to handle the HTTP/HTTPS connection, not the SSH connection. Instead, SSH connections are likely being routed to the machine running Forgejo via DNS, and your reverse proxy is not involved with anything related to SSH.

I assume you either have SSH disabled on your host or SSH on your host uses a port other than 22?

[–] [email protected] 4 points 2 months ago

The thing that makes casting so appealing for me is how ubiquitous it is. It eliminates situations where a guest recommends a show/movie only for us to find out that I can't easily play the content because it's only available on a streaming service that the guest pays for and I do not. As long as the guest brought a device and connected it to my WiFi, the content more than likely could be cast without having to install another app and/or sign up for a new service (or have the guest log in with their account).

I am becoming less optimistic about it though. I just do not think that the level of ubiquity that Chromecast reached even 10 years ago will be matched with a FOSS alternative. Developers would need to incorporate it into their apps, websites, etc. or it would need to be compatible with existing solutions. I doubt Google will open Chromecast up enough so other options can be fully compatible with it. Additionally, without the backing of a major corporation, I do not see developers taking the time to make their content compatible with another casting option.

[–] [email protected] 1 points 2 months ago

Agreed! I am concerned, though, that even if a viable casting alternative started gaining momentum, Google would essentially prevent it from being widely adopted or incorporated into apps/websites the way that Chromecast is. I think it would have to be created by a large tech or media company and/or be compatible with Chromecast.

Apps are still really frustrating though. When an app exists (big if), I found that it either missed key features compared to the corresponding apps on other platforms or its UI/UX was terrible for a TV app.

I could get by if either casting or the apps were comparable to the more popular alternatives. Having neither makes it very difficult to move away from those alternatives.

[–] [email protected] 12 points 2 months ago (4 children)

I do not think what I would want as a replacement exists (yet). My main requirements are:

  • Only FOSS software and firmware
  • Similar level of "casting" compatibility/ubiquity as the discontinued Chromecast
  • Easy navigation and/or great UI/UX
  • Can be controlled with a standalone remote control, a phone/tablet/laptop, and services like Home Assistant
  • As portable and low-powered as the discontinued Chromecast (or no less portable than a small mini-pc)
  • Ability to turn the TV on/off, switch inputs, and control the volume
  • Ability to install apps/plugins directly on the device (maybe even things like Lutris, Moonlight, or something similar for gaming)
    • Ideally, the apps would be as well maintained and provide similar levels of quality as something like an Android TV or Apple TV
  • (bonus) Ability to store media locally for offline playback

I think the closest I have seen is LibreELEC + Kodi on a RaspberryPi or mini-pc. It's still not quite there for my tastes though. Hopefully the recent Chromecast announcement will lead to more/better alternatives in the coming months!

 

I am running a bare metal Kubernetes cluster on k3s with Kube-VIP and Traefik. This works great for services that use SSL/TLS, as Server Name Indication (SNI) can be used to reverse proxy multiple services listening on the same port. Consequently, getting Traefik to route multiple web servers receiving traffic on ports 80 or 443 is not a problem at all. However, I am stuck trying to accomplish the same thing for services that use plain TCP or UDP without SSL/TLS, since SNI is not available in that traffic.

I tried to set up Forgejo where clients will expect to use commands like git clone [email protected]...., which would ultimately use SSH on port 22. Since SSH uses TCP and Traefik supports TCPRoutes, I should be able to route traffic to Forgejo's SSH entry point using port 22, but I ran into an issue where the SSH service on the node would receive/process all traffic sent to the node instead of allowing Traefik to receive and route it. I believe I should be able to change the port that the node's SSH service listens on, or restrict the IP address that it listens on. This should allow Traefik to receive the traffic on port 22 and route it to Forgejo's SSH entry point while also allowing me to SSH directly into the node.
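
For reference, the route I was testing looks roughly like the following (a sketch - the entry point name, namespace, and Service name are my own placeholders, and the extra entry point has to be added to Traefik's static configuration, e.g. through the k3s HelmChartConfig values):

# Extra Traefik entry point (static config / Helm values):
#   ports:
#     ssh:
#       port: 2222        # port the Traefik pod listens on
#       exposedPort: 22   # port exposed on Traefik's LoadBalancer Service
#       protocol: TCP

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: forgejo-ssh
  namespace: forgejo
spec:
  entryPoints:
    - ssh
  routes:
    # plain TCP has no SNI, so HostSNI(`*`) is the only rule that can match
    - match: HostSNI(`*`)
      services:
        - name: forgejo
          port: 22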

However, even if I get that to work correctly, I will run into another issue when other services that typically run on port 22 are stood up. For example, I would not be able to have Traefik reverse proxy both Forgejo's SSH entry point and an SFTP's entry point on port 22 since Traefik would only be able to route all traffic on port 22 to just one service due to the lack of SNI details.

The only viable solution that I can find is to run one service's entry point on port 22 and run each of the other services' entry points on various other ports. For instance, Forgejo's SSH entry point could be port 22 and the SFTP's entry point could be port 2222 (mapped to the pod's port 22). This would require multiple additional ports to be opened on the firewall, and each client would need its configuration and/or commands modified to connect to the service's non-standard port.

Another solution that I have seen is to use other services like stunnel to wrap traffic in TLS (similar to how HTTPS works), but I believe this will likely lead to even more problems than using multiple ports as every client would likely need to be compatible with those wrapper services.

Is there some other solution that I am missing? Is there something that I could do with Virtual IP addresses, multiple load balancer IP addresses, etc.? Maybe I could route traffic on Load_Balancer#1_IP_Address:22 to Forgejo's SSH entry point and Load_Balancer#2_IP_Address:22 to SFTP's entry point?
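
To make that last idea concrete, here is a sketch of what I am imagining with kube-vip handing out one virtual IP per Service. It assumes the kube-vip cloud controller is in use and honors the kube-vip.io/loadbalancerIPs annotation (spec.loadBalancerIP being the older equivalent), and the addresses are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
  namespace: forgejo
  annotations:
    kube-vip.io/loadbalancerIPs: "192.168.1.40"   # dedicated VIP for Forgejo SSH
spec:
  type: LoadBalancer
  ports:
    - name: ssh
      port: 22
      targetPort: 22
      protocol: TCP
  selector:
    app: forgejo
---
apiVersion: v1
kind: Service
metadata:
  name: sftp
  namespace: sftp
  annotations:
    kube-vip.io/loadbalancerIPs: "192.168.1.41"   # separate VIP, so both can use port 22
spec:
  type: LoadBalancer
  ports:
    - name: sftp
      port: 22
      targetPort: 22
      protocol: TCP
  selector:
    app: sftp

This bypasses Traefik entirely for those two services, which may or may not be acceptable, but it would keep the standard port on both.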

tl;dr: Is it possible to host multiple services that do not use SSL/TLS (ie: cannot use SNI) on the same port in a single Kubernetes cluster without using non-standard ports and port mapping?
