
Update Oct 6:

I have been messing around with the idea of permissions, but I just got back from work so this is very much a work in progress. What I noticed is that the "postgres" folder inside the Docker folder containing the docker-compose file has a lock icon on it: https://imgur.com/a/lZir4tt The owner is weird and doesn't exist on the other computer. I can't explain how this owner was created, which may be due to my poor understanding of Docker and docker-compose.
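
If it's useful, the same ownership info can be checked from a terminal with something like this (the paths are just guesses based on my folder layout, adjust to yours):

    # numeric UID/GID of everything inside the Docker folder
    ls -ln ~/Docker

    # detailed ownership info for the postgres data folder
    stat ~/Docker/postgres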

So I have made a pastebin with the docker-compose here, if anyone is interested in this little puzzle: https://pastebin.com/Vsh6S23G This docker-compose is basically the one from the installation guide on the app website; I just changed some password- and user-related values, which I've replaced with placeholders.

I tried using Déjà Dup Backups to sync my entire Docker folder (which also contains Tandoor) and it complained that it could not sync the postgres folder either, so something is definitely wrong with the permissions. That would also explain why I can't create a new recipe on the other computer: it doesn't have permission for that specific task. Oddly enough, I get a server 500 error, but if I refresh the recipe list, the new recipe that triggered the 500 error is actually there.

Would a pastebin of the .env help?

ORIGINAL POST:

Hello again. I hope it's okay that I'm making several posts in a rather short time; I'm stumped.

I run a series of containers on old computer A for the recipe manager Tandoor Recipes.

I want to move it to another computer B, so I initially thought I would:

  - copy the .env and the docker-compose file
  - dump the source database
  - move everything to the new computer, compose everything, and fill the database from the dump (sketched below)
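
The dump and restore steps looked roughly like this (a sketch, not my exact commands; the db_recipes service name and the POSTGRES_USER / POSTGRES_DB variables come from Tandoor's default compose file and .env, so adjust if yours differ):

    # on computer A: dump the database from inside the db container
    docker compose exec -T db_recipes sh -c 'pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' > tandoor_dump.sql

    # on computer B: after copying .env and docker-compose.yml and starting the stack,
    # feed the dump into the new database container
    docker compose exec -T db_recipes sh -c 'psql -U "$POSTGRES_USER" "$POSTGRES_DB"' < tandoor_dump.sql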

I got 500 server errors, so I went on Discord and asked what the proper way of doing this was. I was told that, in theory, I could shut everything down, pack my Tandoor folder into a zip, unpack it on the target computer B, boot everything and voilà.

None of this works properly.

I do manage to get an instance of Tandoor running on my new computer, and it displays every recipe I had originally, but it has an issue when I try to create a new recipe: I get a white page with "Server Error (500)". This does not happen on the original Tandoor, even though the files should, in theory, be exactly the same.

I noticed that on my source computer, the postgres DB directory's permissions change when I start the container, as do those of the directory containing the recipe pictures. So I'm wondering if wrong permissions might be corrupting data while I copy things over?

Thanks

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (2 children)

As others have already mentioned, you are probably correct that it's a permission error. You could follow the advice already posted to use tools that preserve permissions, like rsync, but fixing this botched backup manually could help you learn how to deal with permissions, and that's a rather fundamental concept that anyone self-hosting would benefit from understanding.

If you decide to do this, I would recommend reading up on the concept of user and group permissions on Linux, and on the commands that let you inspect ownership and permissions of directories and files as well as the UID and GID of users. The next step would be to understand how Docker handles permissions for mapped directories. You can get a few pointers from this short explanation by LSIO: https://docs.linuxserver.io/general/understanding-puid-and-pgid. Bear in mind that this is not a Docker standard, but something specific to LSIO Docker images. See also https://docs.docker.com/compose/compose-file/05-services/#long-syntax. The same thing can be set with docker run using the --user flag.
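
For example (the names and paths below are placeholders, not taken from your setup), the kind of inspection I mean and the docker run equivalent look like this:

    # numeric UID and GID of the user you're logged in as
    id -u && id -g

    # ownership and permissions of the mapped directory, shown with numeric IDs
    ls -ln ./postgresql

    # the docker run equivalent of the compose "user:" option
    docker run --user 1000:1000 --env-file ./.env \
      -v "$PWD/postgresql:/var/lib/postgresql/data" postgres:15-alpine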

Logs can also help pinpoint the cause of the issue. The default docker compose setup in Tandoor's docs sets up several containers, one of which acts as a database (db_recipes based on postgres:15-alpine). Inspect that in real time using docker logs -f db_recipes to see the exact errors.
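
For example, from the directory with the compose file (service names assumed from Tandoor's default setup):

    # list the services and the actual container names compose created
    docker compose ps

    # follow just the database service's logs while you reproduce the error
    docker compose logs -f db_recipes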

[–] [email protected] 2 points 1 year ago (1 children)

Thanks a lot! I certainly need to learn about permissions and Docker mapped directories in general. This is still very unclear in my head, and it prevents me from troubleshooting my own stuff, which is frustrating. You're all very cool, but I'd rather not have to post on Lemmy every time an app has wrong permissions, haha. I'll have a read.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

In response to your update: Try specifying the user that's supposed to own the mapped directories in the docker compose file. Then make sure the UID and GID you use match an existing user on the new system you are testing the backup on.

First you need to get the ID of the user you want to run the container as. For a user called foo, run id foo and note down the UID and GID.
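
For example, for a hypothetical user foo:

    # prints UID, primary GID and supplementary groups,
    # typically in the form: uid=1000(foo) gid=1000(foo) groups=1000(foo),...
    id foo

    # just the numbers, handy for pasting into the compose file
    id -u foo
    id -g foo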

Then in your compose file, modify the db_recipes service definition and set the UID and GID of the user that should own the mapped volumes:

  db_recipes:
    restart: always
    image: postgres:15-alpine
    user: "1000:1000" #Replace this with the corresponding UID and GID of your user
    volumes:
      - ./postgresql:/var/lib/postgresql/data
    env_file:
      - ./.env

Recreate the container using docker compose up -d (don't just restart it; you need to load the new config from the docker compose file). Then inspect the postgresql directory using ls -l to check whether it's actually owned by the user with UID 1000 and the group with GID 1000. This should also solve the issue you are having with your backup program: it's probably unable to copy that particular directory because it's owned by root:root and you're not running the backup as root (don't do that; it would circumvent the real problem rather than help you address it).
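
Concretely, from the directory containing the compose file, that would be something like:

    # recreate the container so the new "user:" setting takes effect
    docker compose up -d

    # check numeric ownership of the mapped directory itself; it should now show
    # the UID/GID you put in the compose file (1000:1000 in the example above)
    ls -lnd ./postgresql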

Now, when it comes to copying this to another machine: as already mentioned, you could use something that preserves permissions, like rsync, but for learning purposes I'd do it manually as you did before, even at the risk of messing things up again. On the new machine, repeat this process. First find the UID and GID of the current non-root user (or whatever user you want to run your containers as). Then make sure that UID and GID are set in the compose files. Then inspect the directories to make sure they have the correct ownership. If the compose file isn't honoring the user setting, or if the ownership doesn't match the UID and GID you set for whatever reason, you can also use chown -R UID:GID ./postgresql to change ownership (replace UID:GID with the actual IDs), but that might get overwritten if you don't properly specify it in the compose file as well, so only do it for testing purposes.
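
In command form (1000:1000 is just the example UID:GID from above, and the rsync line is only if you go the permission-preserving route instead; the host and paths are placeholders):

    # for testing only: force ownership of the data directory onto your user
    sudo chown -R 1000:1000 ./postgresql

    # alternative approach: copy with permissions and timestamps preserved
    # (preserving ownership on the destination also requires root on that side)
    sudo rsync -a ./tandoor/ user@computerB:/path/to/tandoor/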

Edit: I also highly recommend using CLIs (terminal) instead of the GUI for this sort of thing. In my experience, the GUIs aren't always designed to give you all the information you need and can actually make things more difficult for you.