AFAIK every NAS just uses unauthenticated connections to pull containers; I’m not sure how many even let you log in (which would raise the limit to a whopping 40 pulls per hour).

So hopefully systems like /r/unRAID handle the throttling gracefully when you click “update all”.

Anyone have ideas on how to set up a local Docker Hub proxy to keep the most common images on-site instead of hitting Docker Hub every time?
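
The obvious starting point seems like a pull-through cache with the plain registry image; a rough, untested sketch (host and port below are placeholders):

    # run the official registry image as a pull-through cache for Docker Hub
    docker run -d --name hub-mirror --restart unless-stopped -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # then point the Docker daemon at the mirror in /etc/docker/daemon.json and restart it:
    #   { "registry-mirrors": ["http://nas.local:5000"] }

If the daemon-level mirror setting works the way I expect, compose files keep referencing images normally and only cache misses fall through to Docker Hub.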

  • GreenKnight23@lemmy.world · +10 · 11 hours ago

    and now I don’t sound so fucking stupid for setting up local image caches on my self-hosted gitlab server.
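
    For anyone wanting to copy the idea, GitLab’s dependency proxy is one way to do the caching; roughly like this (host and group are placeholders for my own setup):

        # log in to the GitLab instance, then pull through the group's dependency proxy
        docker login gitlab.example.com
        docker pull gitlab.example.com/mygroup/dependency_proxy/containers/alpine:latest

    Anything pulled that way gets cached on the GitLab box, so repeat pulls shouldn’t need to touch Docker Hub at all.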

  • lambalicious@lemmy.sdf.org · +24 / -1 · 22 hours ago

    Forgejo gives you a registry built-in.
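
    Rough shape of it, if anyone hasn’t tried (host and owner are placeholders):

        # Forgejo's package registry speaks the OCI protocol, so plain docker works
        docker login forgejo.example.com
        docker tag alpine:latest forgejo.example.com/youruser/alpine:latest
        docker push forgejo.example.com/youruser/alpine:latest

    As far as I know it isn’t a pull-through cache though, so you push the images you care about yourself.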

    Also is it just me or does the docker hub logo look like it’s giving us the middle finger?

    • Scrubbles@poptalk.scrubbles.tech · +24 · 23 hours ago

      Did they really? Oh my god, please tell me you’re joking, that a company as modern as Docker got a freaking Oracle CEO. They pulled a Jack Barker. Did he bring his conjoined triangles of success?

  • PassingThrough@lemm.ee · +24 · 1 day ago

    Huh. I was just considering establishing a caching registry for other reasons. Ferb, I know what we’re going to do today!

  • Shading7104@feddit.nl · +34 · 1 day ago

    Instead of using a Docker Hub proxy, you can also use GitHub’s container registry (GHCR) or Quay. If the project publishes images there, you can easily switch. Alternatively, you can build the Docker image yourself from source; it’s usually not difficult, as most of the process is automated. Or, what I’d personally probably do: just update the image a day later if I hit the limit.
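
    For example (names below are placeholders; check what the project actually publishes):

        # if the project also publishes to GHCR or Quay, it's a one-line change
        docker pull ghcr.io/someproject/someimage:latest

        # or build it yourself from the project's repo
        git clone https://github.com/someproject/someimage.git
        cd someimage
        docker build -t someimage:local .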

    • jaxxed@lemmy.ml · +1 · 10 hours ago

      You can also host your own with Harbor (or MSR v4 if you want a commercial product). You can set them up to replicate from upstream registries.
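
      E.g. once a Harbor proxy-cache project pointed at Docker Hub exists (that part is set up in Harbor’s UI; names below are placeholders), pulls look roughly like:

          # subsequent pulls of the same tag are served from Harbor's cache
          docker pull harbor.example.com/dockerhub-proxy/library/nginx:latest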

  • merthyr1831@lemmy.ml · +7 · 22 hours ago

    I’m quite new to Docker for NAS stuff - how many pulls would the average person do? Like, I don’t think I even have 10 containers 🤨

    • Darkassassin07@lemmy.ca · +22 · 21 hours ago

      I’m running ~30 containers, but they don’t typically all get new updates at the same time.

      Updates are grabbed nightly, and I think the most I’ve seen update at once is like 6 containers.

      Could be a problem for setting up a new system, or experimenting with new toys.

      • lemmyvore@feddit.nl · +10 · edited · 15 hours ago

        The problem is that the main container can (and usually does) rely on other images and layers, and you may need to pull updates for those too. Updating one app can take 5-10 individual pulls.
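
        If you want to watch the counter, Docker exposes a test repo whose manifest response carries your current limit and remaining pulls in the headers; something like this shows it (needs curl and jq; I believe a HEAD request doesn’t itself count as a pull):

            # grab an anonymous token for the rate-limit test repo
            TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)

            # HEAD the manifest and print the ratelimit-* headers
            curl -sI -H "Authorization: Bearer $TOKEN" \
              "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
              | grep -i ratelimit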

  • kingthrillgore@lemmy.ml · +7 / -1 · edited · 22 hours ago

    Well shit, I still rely on Docker Hub even for automated pulls, so this is just great. I guess I’m going back to managing VMs with OpenTofu and package managers.

    What are our alternatives if we use Podman or K8s?
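
    For Podman at least, I’m guessing you can point docker.io pulls at a local mirror via /etc/containers/registries.conf (v2 TOML; the mirror host is a placeholder, untested):

        [[registry]]
        prefix = "docker.io"
        location = "registry-1.docker.io"

        [[registry.mirror]]
        location = "nas.local:5000"
        # insecure = true   # only if the mirror is plain HTTP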

    • wireless_purposely832@lemmy.world · +12 · edited · 20 hours ago

      The issue isn’t Docker vs Podman vs k8s vs others. They all use OCI images to create your container/pod/etc. This new limit impacts all containerization solutions, not just Docker. EDIT: removed LXC as it does not support OCI

      Instead, the issue is Docker Hub vs Quay vs GHCR vs others. It’s about where the OCI images are stored and pulled from. If the project maintainer hosts the OCI images on Docker Hub, then you will be impacted by this regardless of how you use the OCI images.
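
      To make that concrete, the registry is just the first component of a fully-qualified image reference, so switching is usually a one-line change (image names below are only examples):

          docker pull docker.io/library/nginx:latest              # counts against Docker Hub's limit
          docker pull quay.io/prometheus/node-exporter:latest     # served by Quay instead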

      Some options include:

      • For projects that do not store images on Docker Hub, continue using the images as normal
      • Become a paid Docker member to avoid this limit
      • When a project uses multiple container registries, use one that is not Docker Hub
      • For projects that have community or 3rd party maintained images on registries other than Docker Hub, use the community or 3rd party maintained images
      • For projects that are open source and/or have instructions on building OCI images, build the images locally and bypass the need for a container registry
      • For projects you control, store your images on other image registries instead of (or in addition to) Docker Hub
      • Use an image tag that is updated less frequently
      • Rotate the order of pulled images from Docker Hub so that each image has an opportunity to update
      • Pull images from Docker Hub less frequently
      • For images that are used by multiple users/machines under your supervision, create an image cache or image registry for the images your users/machines will use, to mitigate the number of pulls from Docker Hub
      • Encourage project maintainers to store images on image registries other than Docker Hub (or at least provide additional options beyond Docker Hub)
      • Do not use OCI images at all and use VM or bare metal installations instead
      • Use alternative software solutions that store images on registries other than Docker Hub
  • Mubelotix@jlai.lu · +3 / -5 · 1 day ago

    If only they used a distributed protocol like IPFS, we wouldn’t be in this situation.