this post was submitted on 19 Dec 2024

Selfhosted

Yo,

Wondering what the limit is when it comes to how many containers I can run. Currently I'm running around 15 containers; what happens if that grows to, say, 40? Also, can Docker containers go "idle" when not being used, to save system resources?

I'm running an Intel Core i7-6700K CPU. It doesn't seem to be struggling at all with my current setup, except maybe when transcoding for Jellyfin.

[email protected] 3 points 3 days ago

You can't really make them go idle, short of restarting them with a do-nothing command like tail -f /dev/null. What you probably want is to scale the service down to 0: that keeps the declaration that you want the image deployed as a container, while saying "but for right now, don't stand any containers up".
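
The do-nothing trick looks something like this in a compose file (untested sketch; the service and image names are placeholders):

```yaml
services:
  myservice:
    image: myimage:latest          # placeholder image
    # Replace the normal command with a no-op process so the
    # container stays "up" but does essentially nothing.
    command: ["tail", "-f", "/dev/null"]
```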

If you're running a Kubernetes cluster, this is pretty straightforward: edit the Deployment for the service in question and set spec.replicas: 0, or just run kubectl scale deployment <name> --replicas=0. If you're using Docker Compose, I believe the equivalent key is deploy.replicas, which defaults to 1.
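
Roughly like this for Compose (untested sketch, placeholder names; I haven't checked that every Compose version accepts replicas: 0, so treat it as a starting point):

```yaml
services:
  myservice:
    image: myimage:latest   # placeholder image
    deploy:
      replicas: 0           # keep the service declared, but start no containers
```

You can also leave the file alone and run docker compose up -d --scale myservice=0, or simply docker compose stop myservice, which might be the easier on/off toggle in practice.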

As for a limit on the number of running containers, I don't think one exists unless you're running an orchestrator like AWS EKS that sets an artificial limit of... 15 per node? I think? Generally you're limited only by the resources available, which means it's a good idea to set limits on the amount of RAM/CPU each container can use.
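
For example, something like this per service in the compose file (values picked arbitrarily; Compose v2 honors deploy.resources.limits, but double-check against your version, since older setups used mem_limit/cpus at the service level instead):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    deploy:
      resources:
        limits:
          cpus: "2.0"    # cap the container at roughly 2 cores' worth of CPU time
          memory: 2G     # hard memory ceiling for the container
```

The rough CLI equivalent is docker run --cpus 2 --memory 2g. Either way the caps are enforced by the kernel, so one runaway container can't starve everything else.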