fenndev

joined 11 months ago
[–] [email protected] 1 points 12 hours ago

I just spun up a FreshRSS container and it is working flawlessly for that purpose so far. I appreciate the suggestions.

[–] [email protected] -4 points 1 day ago (5 children)

It's hosted, but not self-hosted.

[–] [email protected] 1 points 1 day ago

Linkwarden doesn't appear to support RSS, which is a massive bummer.

[–] [email protected] 2 points 1 day ago

I was just looking at this, actually. For a moment I thought it was going to be a bust but then I saw there is a preference option to open the readable form of a page by default. I also love PWAs...

27
submitted 1 day ago* (last edited 12 hours ago) by [email protected] to c/[email protected]
 

I'm looking for a self-hosted alternative to Omnivore. To keep it short and sweet, I'm looking for an app I can use to subscribe to RSS feeds and maintain Reader Mode-esque archives of news articles and other interesting things I've read. Obsidian integration would be nice but is not a priority; however, the ability to save from Android is a must.

Hoarder is something I've recently spun up on my home server, but despite looking great, it doesn't do what I'd like it to do. Clicking on an article doesn't present me with a Reader Mode archive; it takes me to the actual webpage, and I have to click on something else to get the cached version (and even then, it doesn't format things the way I'd like). I feel this order of operations should be reversed. On the mobile app, you can't even access the cached version.

I've used Wallabag before, but disliked the mobile interface. I wasn't self-hosting it at the time, however, so I'm not sure how difficult that is. Barring finding anything better, I'll likely try to self-host Wallabag.

Shiori looks fantastic, but I'd rather not resort to using Termux on my Android phone to share content; the lack of a mobile app makes that difficult.

Any suggestions?

SOLVED

Following numerous suggestions, I spun up a FreshRSS container and will be looking into both Shiori (which has a third-party mobile app) and Linkwarden. Thanks, everyone!
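
For anyone curious, this is roughly the shape of the Quadlet unit I'd use to run it - the image tag, host port, volume paths, and environment values here are assumptions based on the FreshRSS image documentation rather than my exact file:

[Unit]
Description=FreshRSS
After=network-online.target

[Container]
Image=docker.io/freshrss/freshrss:latest
ContainerName=freshrss
# Host port is arbitrary; the image serves on port 80 internally
PublishPort=8082:80
Volume=%h/.config/freshrss/data:/var/www/FreshRSS/data:Z
Volume=%h/.config/freshrss/extensions:/var/www/FreshRSS/extensions:Z
# CRON_MIN tells the image's built-in cron when to refresh feeds
Environment=CRON_MIN=1,31
Environment=TZ=Etc/UTC

[Service]
Restart=always

[Install]
WantedBy=multi-user.target default.target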

[–] [email protected] 26 points 1 week ago (2 children)

"Life is going to continue on just fine" - unless you're a woman (bans on contraceptives, loss of bodily autonomy), queer (rolling back protections for LGBTQ+ people, penalizing even talking about them), non-Christian, a minority...

[–] [email protected] 15 points 4 months ago

I don't think one currently exists, but it would be an interesting project. There are plenty of trackers for CVEs, but when it comes to project ethics, acquisitions, and the like, there's a real gap to fill.

The two main problems I can see are:

  1. How do you define 'negative'? An open source application being acquired is often a bad thing, but not always. An acquisition by FUTO is more likely to be viewed positively than an acquisition by Microsoft, but either can be interpreted positively or negatively depending on the person.

  2. Community involvement is absolutely critical. If I were running a service like this (for example), I would only really be keeping up with the services I use and care about. I would need others to submit information and then verify it.

[–] [email protected] 1 points 5 months ago (1 children)

Sorry, I should clarify. I'm hoping to possibly have a setup like this:

  1. Browser makes a request to an eepsite
  2. The router sees the request is to a domain ending in .i2p and forwards the request to a service running on the router
  3. That service then performs the necessary encryption and establishes a connection to the I2P network.

I'd imagine it's a similar process for other protocols and networks. No idea if this is possible or desirable.
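
I imagine that if the I2P router itself lived on the OPNsense box (assuming i2pd can be installed there - the path, LAN address, and port below are assumptions, with 4444 being i2pd's default HTTP proxy port), step 3 would mostly come down to exposing its HTTP proxy on the LAN side, something like:

# hypothetical /usr/local/etc/i2pd/i2pd.conf on the router
[httpproxy]
enabled = true
# bind to the LAN-facing address so client devices can reach the proxy
address = 192.168.1.1
port = 4444

Clients would still need to be told to send *.i2p hosts to that proxy (a PAC file or a per-browser proxy rule), so it isn't fully transparent, but at least the I2P router itself would only have to exist in one place.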

 

TL; DR: Is it possible (and if so, desirable) to configure my OPNsense router to handle non-standard traffic instead of needing to configure each client device manually? Examples of what I mean by 'non-standard traffic' include Handshake, I2P, ZeroNet, and Tor.

[–] [email protected] 4 points 5 months ago (5 children)

Any issues lately with your network? When DNS is down or having issues, Firefox and forks take forever to start up.

[–] [email protected] 2 points 5 months ago (1 children)

Sorry, I should clarify - the list is the items I believe I'll likely need, and the question marks indicate that I'm either not sure they're necessary or not sure about the specifics of what to get. For example, I'm sure I need resistors, but I'm not sure whether I need everything from 1Ω to 1MΩ, or which ICs to get. I was also unsure whether I should get a variable power supply. Hopefully that makes more sense?

 

I'm new to electronics and looking to assemble an array of components and tools for working on and designing electronics & circuits. Something immediately apparent is that all of the widely available kits orient you towards working with microcontrollers and SBCs; these kits are cool, but I want to have a halfway decent understanding of the underlying analog components and circuit design before I go digital.

With that in mind, what should I get? If anyone could point me toward specific parts or topics to look into, I'd really appreciate it! Thanks for the help.

Current list

  • A decent breadboard
  • Jumper wires
  • Multimeter
  • Batteries
  • Variable Power Supply?
  • Assorted resistors (1Ω-?)
  • Capacitors (Electrolytic and ceramic?)
  • Various ICs?
  • Transistors?
  • Diodes, probably?
  • Potentiometers
[–] [email protected] 7 points 5 months ago* (last edited 5 months ago)

I hope eventually we get an ARM-powered Framework.

Bought a Framework shortly after Linus Techmin joined forces with them. It was stolen out of my partner's car a few months later, and I just haven't been able to justify (or afford) a replacement.

[–] [email protected] 3 points 6 months ago (1 children)

Oh. You're right. That worked. I feel really silly that I missed that.

Thank you so much!

[–] [email protected] 2 points 6 months ago (3 children)

I have both web and websecure set up as entrypoints.

8
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]
 

Edit: Thanks for the help, the issue was solved! I had Traefik's loadbalancer set to route to port 8081 instead of the container's internal port of 80. Whoops.
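
Concretely, the fix was just pointing the loadbalancer label (shown in the config below) at the container-internal port instead of the published one:

# before (what the config below still shows)
Label=traefik.http.services.vault.loadbalancer.server.port=8081
# after
Label=traefik.http.services.vault.loadbalancer.server.port=80

Since Traefik reaches the container over the shared fenndev_default network, it needs the port Vaultwarden listens on inside the container (80), not the host-published port (8081).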

Intro

Hi everyone. I've been busy configuring my homelab and have run into issues with Traefik and Vaultwarden running within Podman. I've already successfully set up Home Assistant and Homepage, but for the life of me I cannot get this working. I'm hoping a fresh pair of eyes will be able to spot something I missed or provide some advice. I've tried to provide all the information and logs relevant to the situation.

Expected Behavior:

  1. Requests for *.fenndev.network are sent to my Traefik server.
  2. Incoming HTTPS requests to vault.fenndev.network are forwarded to Vaultwarden
    • HTTP requests are upgraded to HTTPS
  3. Vaultwarden is accessible via https://vault.fenndev.network and utilizes the wildcard certificates generated by Traefik.

Quick Facts

Overview

  • I'm running Traefik and Vaultwarden in Podman, using Quadlet
  • Traefik and Vaultwarden, along with all of my other services, are part of the same fenndev_default network
  • Traefik is working correctly with Home Assistant, AdGuard Home, and Homepage, but returns a 502 Bad Gateway error with Vaultwarden
  • I've verified that port 8081 is open on my firewall and my service is reachable at {SERVER_IP}:8081.
  • 10.89.0.132 is the internal Podman IP address of the Vaultwarden container

Versions

Server: AlmaLinux 9.4

Podman: 4.9.4-rhel

Traefik: v3

Vaultwarden: alpine-latest (1.30.5-alpine I believe)

Error Logs

Traefik Log:

2024-05-11T22:09:53Z DBG github.com/traefik/traefik/v3/pkg/server/service/proxy.go:100 > 502 Bad Gateway error="dial tcp 10.89.0.132:8081: connect: connection refused"

cURL to URL:

[fenndev@bastion ~]$ curl -v https://vault.fenndev.network
*   Trying 192.168.1.169:443...
* Connected to vault.fenndev.network (192.168.1.169) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):

Config Files

vaultwarden.container file:

[Unit]
Description=Password 
After=network-online.target
[Service]
Restart=always
RestartSec=3

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

[Container]
Image=ghcr.io/dani-garcia/vaultwarden:latest-alpine
Exec=/start.sh
EnvironmentFile=%h/.config/vault/vault.env
ContainerName=vault
Network=fenndev_default

# Security Options
SecurityLabelType=container_runtime_t
NoNewPrivileges=true                                    
# Volumes
Volume=%h/.config/vault/data:/data:Z

# Ports
PublishPort=8081:80

# Labels
Label=traefik.enable=true
Label=traefik.http.routers.vault.entrypoints=web
Label=traefik.http.routers.vault-websecure.entrypoints=websecure
Label=traefik.http.routers.vault.rule=Host(`vault.fenndev.network`)
Label=traefik.http.routers.vault-websecure.rule=Host(`vault.fenndev.network`)
Label=traefik.http.routers.vault-websecure.tls=true
Label=traefik.http.routers.vault.service=vault
Label=traefik.http.routers.vault-websecure.service=vault

Label=traefik.http.services.vault.loadbalancer.server.port=8081

Label=homepage.group="Services"
Label=homepage.name="Vaultwarden"
Label=homepage.icon=vaultwarden.svg
Label=homepage.description="Password Manager"
Label=homepage.href=https://vault.fenndev.network

vault.env file:

LOG_LEVEL=debug
DOMAIN=https://vault.fenndev.network 
 

cross-posted from: https://leminal.space/post/6179210

I have a collection of roughly 110 4K Blu-Ray movies that I've ripped, and I want to take the time to compress and store them for use on a future Jellyfin server.

I know the very basics of ffmpeg and general codec information, but I have a very specific set of goals in mind that I'm hoping someone could point me in the right direction with:

  1. Smaller file size (obviously)
  2. Image quality good enough that I cannot spot the difference, even on a high-end TV or projector
  3. Preserved audio
  4. Preserved HDR metadata

In a perfect world, I would love to be able to convert the proprietary HDR into an open standard, and the Dolby Atmos audio into an open standard as well, but the goals above are a good compromise.

Assuming that I have the hardware necessary to do the initial encoding, and my server will be powerful enough for transcoding in that format, any tips or pointers?
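
For reference, this is the rough shape of command I have in mind as a starting point - the CRF value, preset, and whether a given ffmpeg/x265 build carries the HDR10 mastering metadata through on its own are all things I'd verify on a short test clip with ffprobe before committing to the whole library:

ffmpeg -i input.mkv \
  -map 0 -c:a copy -c:s copy \
  -c:v libx265 -preset slow -crf 18 \
  -pix_fmt yuv420p10le \
  -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:hdr10-opt=1:repeat-headers=1" \
  output.mkv

-map 0 keeps every stream, and copying the audio untouched is what preserves the TrueHD/Atmos track; Dolby Vision is the piece that generally doesn't survive a re-encode without extra tooling.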

 

14
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/[email protected]
 

Shortly before the recent removal of Yuzu and Citra from Github, attempts were made to back up and archive both Github repos; it's my understanding that these backups, forks, etc. are fairly incomplete, either lacking full Git history or lacking Pull Requests, issues, discussions, etc.

I'm wondering if folks here have information on how to perform thorough backups of public, hosted git repos (e.g. Github, Gitlab, Codeberg, etc.). I'd also like to automate this process if I can.

git clone --mirror is something I've looked into for a baseline, with backup-github-repo looking like a decent place to start for what isn't covered by git clone.

The issues I can foresee:

  • Each platform builds its own tooling atop Git, like Issues and Pull Requests from Github
  • Automating this process might be tricky
  • Not having direct access/contributor permissions for the Git repos might complicate things, but I'm not sure

I'd appreciate any help you could provide.
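
To make that baseline concrete, this is roughly what I have in mind so far (OWNER/REPO are placeholders, and pagination/authentication for the API side is exactly the part I'd still need to script):

# mirror every ref (branches, tags) -- this does not capture issues or PRs
git clone --mirror https://github.com/OWNER/REPO.git REPO.git

# refresh the mirror later (easy to cron)
git -C REPO.git remote update --prune

# issues and pull requests only exist in GitHub's API, not in git;
# this endpoint returns both (PRs appear as issues with a pull_request field)
curl -s "https://api.github.com/repos/OWNER/REPO/issues?state=all&per_page=100&page=1" > issues-page-1.json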
