• LedgeDrop@lemm.ee

      Begins?!? Docker Inc was waist-deep in enshittification the moment they started rate-limiting Docker Hub, which was nearly 3 or 4 years ago.

      This is just another step towards the deep end. Companies that could easily move away from Docker Hub did so years ago. The companies that remain struggle to leave and will continue to pay.

      • WhatAmLemmy@lemmy.world

        When that happened, our DevOps teams migrated all our prod k8s clusters to Podman with zero issues. Docker who?

        • sudneo@lemm.ee

          Why would anybody use Podman for k8s… containerd has been the default for years.

          • 1984@lemmy.today

            Maybe you can run containerd with podman… I haven’t checked. I just run k3s myself.

            • sudneo@lemm.ee

              Yeah, but you don’t need anything besides the runtime with Kubernetes. Podman is completely unnecessary, since the kubelet does the container orchestration based on the Kubernetes control plane. Running Podman is like running Docker: unnecessary attack surface for an API that nothing in Kubernetes uses.

              I run k0s at home, FWIW, tried k3s too :)

              • 1984@lemmy.today

                Yeah I know.

                Interesting that you run k0s, hadn’t heard about it. Would you mind giving a quick review and compare it to k3s, pros and cons?

                • sudneo@lemm.ee

                  I can’t really make an exhaustive comparison. k3s was a little too opinionated for my taste, with lots of Rancher logic baked in (paths, ingress, etc.). k0s was a little more “bare”. I also hit an error upgrading k3s in the past, while k0s has given me no issues in about 2 years. k0s also has an Ansible role that eases operations; I don’t know whether k3s has one now too. Either way, they are quite similar overall, so if one is working for you, rest assured you are not missing out.

        • gencha@lemm.ee

          Your choice of container runtime has zero impact on the rate-limits of Docker Hub. They probably had a container image proxy already and just switched because Docker is a security nightmare and needlessly heavy.

  • xantoxis@lemmy.world

    Folks, the docker runtime is open source, and not even the only one of its kind. They won’t charge for that. If they tried to make it closed source, everyone would just laugh and switch to one of several completely free alternatives. They charge for hosting images, build time on their build servers, and various “premium” developer tools you don’t need. In fact, you need none of this, you can do all of it yourself on whatever hardware you deem to be good enough. There are also many other hosted alternatives out there.

    Docker thinks they have a monopoly, for some reason. If you use the technology, you are probably already aware that they don’t.

      • cheet@infosec.pub

        The Windows container runtime is free as well: simply install the Docker runtime from Chocolatey or winget, along with the Windows Containers and Hyper-V Windows features. This is what we do on some build machines for CI.

        There’s no reason to use Desktop other than “ease of use”.

        • TrumpetX@programming.dev

          There are some reasons. Networking can get messed up, so Docker Desktop “fixed that” for you, but the dirty secret is that it’s basically a Linux VM running Docker CE, plus some convenience network routes.

          • cheet@infosec.pub

            You’re talking about Linux containers on Windows. I think the commenter above was referring to Windows containers on Windows, which is its own special hell for lucky folks like me.

            Otherwise I totally agree. I’ve done both setups without Docker Desktop.

      • Sir Aramis@lemmy.ca

        I second Podman. I’ve been using it recently and find it to be pretty good!

        • barsquid@lemmy.world

          I am getting into Podman but I cannot force my firewall to respect it for some reason.

      • mosiacmango@lemm.ee

        Rancher is owned by SUSE, which is largely a solid steward in the community.

        They also have a Kubernetes frontend called Harvester. It can run VMs directly, which is nice.

        • Scribbd@feddit.nl

          Well, there is this one thing: they asked openSUSE to drop the SUSE branding…

          • bizarroland@fedia.io

            Which is fair. Fedora never called itself Red Hat. CentOS never called itself Red Hat.

            SUSE is a pretty good company and deserves the right to its intellectual property and trademarks. openSUSE shouldn’t make a big deal out of simply changing its name.

            They could rename themselves to OpenSusame and keep rolling without any issues whatsoever.

            • Petter1@lemm.ee

              Of course, but I still think it is not very smart of SUSE, since I bet many companies went with SUSE because coworkers had very good experiences with openSUSE.

              If my company ever needed corporate Linux, I, at least, would recommend SUSE for exactly that reason.

    • treadful@lemmy.zip

      So does this setup like a one-node kubernetes cluster on your local machine or something? I didn’t know that was possible.

      • chameleon@fedia.io

        Basically yes. Rancher Desktop sets up K3s in a VM and gives you a kubectl, docker and a few other binaries preconfigured to talk to that VM. K3s is just a lightweight all-in-one Kubernetes distro that’s relatively easy to set up (of course, you still have to learn Kubernetes so it’s not really easy, just skips the cluster setup).

    • Nithanim@programming.dev

      I expose Docker over TCP in WSL and set the env var on the host to point at it. A bit more manual, but if you don’t need anything special, it works too.
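
      In case it helps anyone, that setup is roughly this (a sketch; 2375 is just the conventional unencrypted port, and it should stay bound to localhost):

      ```shell
      # Inside WSL: make dockerd listen on TCP in addition to the unix socket.
      sudo dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 &

      # On the Windows host: point any docker CLI at the WSL daemon.
      export DOCKER_HOST=tcp://127.0.0.1:2375
      docker version
      ```

      WSL2 forwards localhost ports to the Windows side, which is why 127.0.0.1 works from the host.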

      • KellysNokia@lemmy.world

        That gives me an idea - managers can ask staff to learn the CLI and give them gift cards for what it would have cost to license the Docker Desktop client 🧠

    • thorisalaptop@lemmy.world

      Docker Engine (which is the core of what people think of as “Docker”) is FOSS. Docker Desktop (which most people rely on for local development) is free for individuals but I believe the license says companies over a certain size are required to pay.

      And on top of that the paid plans also come with support, which large businesses frequently require, and private repositories on docker’s image repository.

      • Cyborganism@lemmy.ca

        This is the correct response.

        At my job we’ve been asked to remove Docker desktop unless it is absolutely necessary for a client project.

        I’ve just been using Docker through command line via WSL and that’s good enough for me.

        • kameecoding@lemmy.world

          I don’t see any use for Docker Desktop. You can see the running containers in a GUI instead of just typing docker ps in a terminal… damn, what a fucking awesome and needed thing. It’s gonna totally come in handy when I do deployments through the terminal and didn’t learn the commands.

              • UnsavoryMollusk@lemmy.world

                In VSCodium you have the Docker plugin. It offers pretty much the same capabilities as Docker Desktop (view containers, images, etc., connect to containers, browse their files, and so on).

        • thorisalaptop@lemmy.world

          I think Docker Desktop’s bigger value prop is that it’s a well-supported, zero-effort setup of a VM to run the Docker daemon on platforms that don’t support it natively (e.g. macOS, which a lot of programmers use). And it very cleanly handles mounting your local filesystem into containers running in the VM, which is important for dev envs and used to be a source of friction with alternatives (although it seems like the competition has caught up, and this also now works out of the box with Rancher Desktop and others?). Having a GUI is somewhere behind those, though I know folks who have a weird preference for GUIs 🤷‍♀️.

          I’m just a guy who uses Linux and spends most of his time in a terminal, so I’m not saying I value docker desktop, and I personally don’t have to deal with any of this so I’m probably behind on how good the alternatives are. Just saying where I see other people get use out of it.

        • sugar_in_your_tea@sh.itjust.works

          We use it, and I honestly don’t see much value. I use 90% CLI, but occasionally it’s nice. I use macOS at work, so it’s nice to be able to see how much space the VM is using. Also, searching through logs is a little nicer through the GUI than the CLI.

          I actively avoid the GUI at home because, even on Linux, it’ll spin up a VM to host your containers, whereas if you stick with the CLI, there’s no VM, which solves soooo many headaches.

      • magic_smoke@links.hackliberty.org

        Glad I run everything in a VM. If you want my money, you can accept donations and sell support contracts.

        The moment you hide features or code behind a paywall or proprietary license is the moment you no longer get my fucking money.

        Granted, random weirdos who donate to FLOSS projects probably weren’t paying Docker’s bills anywho.

    • AreaKode@lemmy.world

      Support. If you’re a business, you pay to keep uptime high. This is unnecessary for most people.

        • rombert@lemmy.world

          Yes, in the sense that if you are a free or unauthenticated user and pull too often (including checking whether a tag exists), you will get rate-limited and have to wait or pay.

          • kobra@lemm.ee

            Can confirm. Spent a bunch of time a few weeks ago setting up ECR pull through cache in AWS to alleviate this very issue.

    • OpenPassageways@lemmy.zip

      I don’t think you even need Docker licenses to run Linux containers, but unfortunately I need to deal with this because I have some legacy software running in windows containers.

      • Shimitar@feddit.it

        That’s not the point. Maybe you can, but for how long? You will never stop asking that question with Docker…

    • Narwhalrus@lemmy.world

      We’ve completely transitioned from Docker to Podman where I work. The only pain point was podman compose being immature compared to docker compose, but it turns out you can easily run docker compose against Podman using the Podman socket.
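
      For anyone curious, the socket trick is roughly this (a rootless sketch; paths assume a systemd user session):

      ```shell
      # Start Podman's Docker-compatible API socket for the current user.
      systemctl --user enable --now podman.socket

      # Point docker compose (and the plain docker CLI) at that socket.
      export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

      # Unmodified docker compose files now run against Podman instead of dockerd.
      docker compose up -d
      ```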

      • Shimitar@feddit.it

        I think you wrote it backwards: transitioned from Docker to Podman?

        Yeah, Podman should use quadlets, not compose, but it still works just fine with docker compose and the Podman socket!

        • Narwhalrus@lemmy.world

          Oops. Thanks for the correction.

          I hadn’t heard of quadlets. I’ll have to give them a look.

      • gencha@lemm.ee

        I gave podman compose a fresh try just the other day and was happy to see that it “just worked”.

        I’m personally pissed about aardvark-dns, which provides DNS for Podman. The version still in Debian Stable sets a TTL of 24h on A-record responses. This caused my entire service network to be disrupted whenever a pod restarted; the default behavior for similar resolvers is a TTL of 0. It’s like the people who maintain it took it as an opportunity to rewrite existing solutions in Rust and implement all the bugs they could. Sometimes it feels like someone just thought it would be a fun summer-break project to implement DNS or network security.

  • gencha@lemm.ee

    Their entire offering is such a joke. I’m forced to use Docker Desktop for work, as we’re on Windows. Every time that piece of shit gets updated, it’s more useless garbage. Endless security snake oil features. Their installer even messes with your WSL home directory. They literally fuck with your AWS and Azure credentials to make it more “convenient” for you to use their cloud integrations. When they implemented that, they just deleted my AWS profile from my home directory, because they felt it should instead be a symlink to my Windows home directory. These people are not to be trusted with elevated privileges on your system. They actively abuse the privilege.

    The only reason they exist is that they are holding the majority of images hostage on their registry. Their customers are similarly being held hostage, because they started to use Docker on Windows desktops and are now locked in. Nobody gives a shit about any of their benefits. Free technology and hosting was their setup, now they let everyone bleed who got caught. Prices will rise until they find their sweet spot. Thanks for the tech. Now die already.

    • prole@lemmy.blahaj.zone

      I actually thought this headline was a joke (i.e. adding 80% of 0 to 0 equals 0), until I clicked the link to see that people actually pay for Docker? I guess this is for Enterprise?

      I have never really had much use for it, so never have installed it, but it seems like everyone here uses Docker, which is surprising given the cost and what you just said.

    • Evoliddaw@lemmy.ca

      This speaks to my soul so much. I started at a nonprofit 2 years ago, and it pains me how much the company spends on Oracle and Docker now, with no one doing anything about it. So much of our infrastructure is built to rely on these things that we can’t just do without them when they pull crazy shit like this. And Oracle and Docker can afford to do this as long as a few cash cows like us hang on. Hostage is the worst and best description.

    • khorak@lemmy.dbzer0.com

      I switched to running docker inside wsl2 (installed as per their docs) and so far it’s been working well.

      • gencha@lemm.ee

        It’s the way to go, but too difficult for most users in my experience. They’d rather just install Docker Desktop and use Git Bash. Sad reality.

  • arthurpizza@lemmy.world

    Hot take: Good for them.

    This will have zero impact on 99% of independent developers. Most small companies can move to an alternative or roll their own infrastructure. This will only really impact large corporations. I’m all for corporation-on-corporation violence. Let them fight.

    • corsicanguppy@lemmy.ca

      This is a different take on the VMscare Broadcom purchase.

      The real losers here are SOHOs, where it is too pricey to migrate and also too pricey not to. I don’t know whether that’s in your 1% or 99%, but:

      • devs don’t develop for infrastructure their customers don’t use. It’s as dead as LKC, then.
      • big customers have deprecated their VMware infra and are only spending on replacement products; if they do the same for Docker, the company will suffer within a year.

      If Docker doesn’t have the gov/mil revenue, are we prepared for the company shedding projects and people as it shrinks?

      Remember: when tech elephants fight, it’s we the grass who suffer.

    • withtheband@lemmy.world

      How is the transition from Docker to Podman? I’m using two compose files with about 10 containers each, plus Portainer to comfortably restart stuff on the fly.

      • Telodzrum@lemmy.world

        I can only provide my experience; it was a drop-in replacement. I have 7 services running and 3 db containers. I was able to migrate using the Podman official instructions without issue.

      • Grass@sh.itjust.works

        From what I can gather, it’s currently recommended to use quadlets to generate systemd units to achieve what compose was doing. podman compose is a thing, but IIRC it wasn’t a straight drop-in; I had to change the syntax and formatting a bit for it to work. From my brief testing, quadlets seem like less hassle, but if you use a non-systemd distro, I don’t know.
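
        For reference, a quadlet is just a systemd-style unit file dropped into ~/.config/containers/systemd/; something like this (the service name and image are made up for illustration):

        ```ini
        # whoami.container -- on `systemctl --user daemon-reload`, quadlet
        # generates a matching whoami.service unit from this file.
        [Unit]
        Description=Example web container

        [Container]
        Image=docker.io/traefik/whoami:latest
        PublishPort=8080:80

        [Service]
        Restart=always

        [Install]
        WantedBy=default.target
        ```

        After the daemon-reload, `systemctl --user start whoami.service` pulls and runs the container like any other service.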

      • mlg@lemmy.world

        I’d say about 99% is the same.

        Two notable things that were different:

        • Podman’s config file is different; I needed to edit where containers are stored, since I have a dedicated location I want to use.
        • The preferred method for exposing Nvidia GPUs to containers is CDI, which IMO is much more concise than Docker’s Nvidia GPU device setup.

        The second one is also documented on the NVIDIA Container Toolkit site, and it’s very easy to edit a compose file to use CDI instead.

        There are also some small differences here and there, like Podman asking for a preferred remote source instead of defaulting to Docker Hub.
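
        As a concrete flavour of the CDI route (a sketch assuming the NVIDIA Container Toolkit is installed on the host; the image tag is just an example):

        ```shell
        # Generate the CDI spec once on the host.
        sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

        # Containers then request GPUs by CDI device name; no legacy
        # nvidia runtime hook or --gpus flag involved.
        podman run --rm --device nvidia.com/gpu=all \
          docker.io/nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
        ```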

  • beerclue@lemmy.world

    Are you guys really pulling more than 40 images per hour? Isn’t the free tier enough?

    • pop@lemmy.ml

      On Lemmy, it’s a sin to make money off your work, especially if it’s an open-source-core project selling paid infrastructure/support. You can only ask for donations and/or quit. No in-between.

    • gencha@lemm.ee

      A single malfunctioning service that restarts in a loop can exhaust the limit near instantly. And now you can’t bring up any of your services, because you’re blocked.

      I’ve been there plenty of times. If you have to rely on docker.io, you’d better pay up. Running your own Nexus Repository Manager or Harbor as a pull-through proxy can drastically improve your situation, though.
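
      For the Docker engine side, the proxy boils down to one line of daemon config (the URL is a placeholder for your own Harbor/Nexus mirror):

      ```json
      {
        "registry-mirrors": ["https://mirror.example.internal"]
      }
      ```

      That goes in /etc/docker/daemon.json; Podman has the equivalent knob in registries.conf.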

      Docker is a pile of shit. Steer clear entirely of any of their offerings if possible.

      • beerclue@lemmy.world

        I use Docker at home and at work, and Nexus at work too. I really don’t understand… even a malfunctioning service shouldn’t pull the image over and over; there should be a cache. It could be some fringe case, but I’ve never experienced it.

        • gencha@lemm.ee

          Ultimately, it doesn’t matter what caused you to be blocked from Docker Hub due to rate-limiting. When you’re in that scenario, it’s most cost efficient to buy your way out.

          If you can’t even imagine what would lead up to such a situation, congratulations, because it really sucks.

          Yes, there should be a cache. But sometimes people force-pull images on service start to ensure they get the latest “latest” tag. Every tag floats, not just “latest”, and lots of people don’t pin digests in their OCI references, which almost implies wanting to refresh cached tags regularly. Especially when you start critical services, you might re-pull their tag in case it drifted.
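
          Pinning looks roughly like this (the image name is just an example); a digest reference never drifts, so a cached copy stays valid:

          ```shell
          # Resolve the tag's current digest once...
          docker pull docker.io/library/nginx:latest
          docker image inspect --format '{{index .RepoDigests 0}}' \
            docker.io/library/nginx:latest
          # ...then reference that immutable digest in compose files/manifests:
          #   image: docker.io/library/nginx@sha256:<digest printed above>
          ```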

          Consider: you have multiple hosts in your home lab, all running a good couple of services. You roll out a container-runtime upgrade to your network; it resets all caches and restarts all services. Some pulls fail. Some of them are for DNS and other critical services. Suddenly your entire network is down, and you can’t even get on the Internet because your Pi-hole doesn’t start. You can’t recover, because you’re rate-limited.

          I’ve been there a couple of times until I worked on better resilience, but relying on docker.io is still a problem in general. I did pay them for quite some time.

          This is only one scenario where their service bit me. As a developer, it gets even more unpleasant, and I’m not talking commercial.

    • Pieisawesome@lemmy.world

      One of the previous places I worked at had about a dozen outbound IP addresses (company VPN).

      We also had 10k developers who all used docker.

      We exhausted the rate limit constantly. They paid for an unlimited account, and we would just queue an automation that pulled the image and mirrored it into the local artifact repo.

  • randon31415@lemmy.world

    Is this the program that open source people use to install all the random dependencies that their program needs to work? The one that people tell me to use when I complain about git bash pico sudo pytorch install commands?

    Or did another company copy their name?

    • gsfraley@lemmy.world

      I mean, they’re one implementor of about 10 that use the same container standards. It sucks that they were first, so their name is now synonymous with containers à la Kleenex, but the technology itself is standard, very open and ubiquitous, and a huge step forward in simplifying deployments and development lifecycles that would otherwise be too complex to reasonably handle.

    • gencha@lemm.ee

      Not having to install dependencies is a benefit of containers and their images. That’s a pretty big thing to miss. Maybe give it a closer look.

      • sugar_in_your_tea@sh.itjust.works

        But it does in a lot of cases. At work, we use Docker images to bundle our dependencies for each microservice, and at home, I use Docker images for the same reason on my self-hosted repos. It’s fantastic for running servers in a sandbox so you don’t have to worry about what dependencies the host has.

        But perhaps OP is talking about flatpaks instead.

  • Olgratin_Magmatoe@lemmy.world

    At work we get around this by not having docker or anything similar set up in the first place.

    I’m getting tired of it lol