Until now I've just put docker-compose.yml in one user's directory (my own) and called it a day.

But what about a service with multiple admins, or with load that's split up more horizontally?

  • ghulican@lemmy.ml · 1 year ago

    Env variables get saved to 1Password (a self-hosted alternative would be Infisical), with a project for each container.

    Docker compose files get synced up to my GitHub account.

    I have been using the new “include” attribute to split up each container into its own docker compose file.

    Usually I organize by service type:

    • media: sonarr, radarr
    • downloaders: sab

    Not sure if that answers the question…
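    The "include" attribute mentioned above is part of the Compose specification (supported in Docker Compose v2.20+). A minimal sketch; the file layout and service names here are assumptions, not the commenter's actual setup:

    ```yaml
    # docker-compose.yml: top-level file that pulls in per-service files
    include:
      - media/sonarr/compose.yaml
      - media/radarr/compose.yaml
      - downloaders/sab/compose.yaml

    services:
      # services shared by the whole stack can still be declared here
      watchtower:
        image: containrrr/watchtower
        restart: unless-stopped
    ```

    Each included file is a normal Compose file with its own `services:` block, so `docker compose up -d` at the top level brings up everything.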

  • Toribor@corndog.social · 1 year ago

    I’ve been slowly moving all my containers from compose to pure Ansible instead. Makes it easier to also manage creating config files, setting permissions, cycling containers after updating files etc.

    I still have a few things in compose though and I use Ansible to copy updates to the target server. Secrets are encrypted with Ansible vault.
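    A minimal sketch of that pattern, assuming the `community.docker` collection is installed; the file paths and names are hypothetical:

    ```yaml
    # playbook.yml: deploy a config file, then cycle the container via a handler
    - hosts: docker_hosts
      become: true
      tasks:
        - name: Deploy app config
          ansible.builtin.template:
            src: app.conf.j2
            dest: /opt/app/app.conf
            owner: root
            mode: "0640"
          notify: Restart app

        - name: Ensure app container is running
          community.docker.docker_container:
            name: app
            image: nginx:alpine
            restart_policy: unless-stopped
            volumes:
              - /opt/app/app.conf:/etc/app.conf:ro

      handlers:
        - name: Restart app
          community.docker.docker_container:
            name: app
            image: nginx:alpine
            restart: true
    ```

    Secrets referenced by the templates can live in an `ansible-vault`-encrypted vars file, as the comment describes.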

  • Eager Eagle@lemmy.world · 1 year ago

    I had Portainer setup, but it was clunky and the web UI added little value.

    Now I just have a local git repo with a directory for each compose stack and run docker compose commands as needed. The repo holds all the YAML and config files I care to keep track of. Env variables live in gitignored .env files, with matching .env.example files in version control. I keep the sensitive values in my password manager in case I ever have to recreate a .env from its example counterpart.

    To handle volumes, I avoid Docker-managed volumes at all costs in favor of cleaner bind mounts. This way the data for each stack always lives alongside the corresponding configuration files. If I care about keeping the data, it's either version controlled (when mostly text) or backed up with kopia (when mostly binary).
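    A sketch of one such stack directory, with hypothetical names:

    ```yaml
    # stacks/jellyfin/compose.yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        restart: unless-stopped
        volumes:
          # bind mounts keep the data next to this file,
          # rather than hidden under /var/lib/docker/volumes
          - ./config:/config
          - ./cache:/cache
    ```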

    • retrodaredevil@lemmy.world · 1 year ago

      I do something similar, but I avoid gitignored secrets entirely, because any secret data should be readable only by root. Plus, any data that is not version controlled goes in a common directory, so all I have to back up is that one directory and I'm good. It also makes moving between machines easy if I ever need to do that.
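      One way to sketch that permission scheme (the paths here are hypothetical; `install` copies a file while setting owner and mode in one step):

      ```shell
      # Secrets live outside the repo, readable only by root
      sudo install -d -m 700 -o root -g root /opt/secrets
      sudo install -m 600 -o root -g root media.env /opt/secrets/media.env

      # compose is then pointed at the secret file explicitly:
      # docker compose --env-file /opt/secrets/media.env up -d
      ```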

  • skadden@ctrlaltelite.xyz · 1 year ago

    I host forgejo internally and use that to sync changes. .env and data directories are in .gitignore (they get backed up via a separate process)

    All the files belong to my docker group, so anyone in it can read everything. Restarting services is handled by systemd unit files (so sudo systemctl stop/start/restart); any user that needs to manipulate containers has the appropriate sudo access.

    It’s only me that does all this, though; I set it up this way for funsies.
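    A sketch of such a unit file, assuming a compose stack at a hypothetical path:

    ```ini
    # /etc/systemd/system/media-stack.service
    [Unit]
    Description=Media docker compose stack
    Requires=docker.service
    After=docker.service

    [Service]
    Type=oneshot
    RemainAfterExit=true
    WorkingDirectory=/srv/stacks/media
    ExecStart=/usr/bin/docker compose up -d
    ExecStop=/usr/bin/docker compose down

    [Install]
    WantedBy=multi-user.target
    ```

    With that in place, `sudo systemctl restart media-stack` cycles the stack, and a sudoers rule can grant just that command to specific users.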

  • Aux@lemmy.world · 1 year ago

    It’s better to manage your infrastructure with Ansible.