Hello All,

I currently have a home server on a Raspberry Pi 4, with all my services running as Docker containers. Each container has its own directory containing its config and database files, which makes them easy to back up and export.

However, in the future I plan to migrate to a more powerful server, which will probably not have an ARM CPU. That effectively means I will also have to switch to the corresponding x86 Docker images. So will these new x86 images work with my backed-up Docker config volumes?

  • cryptobots@alien.topB · 11 months ago

Neither docker compose (unless you run docker compose build and have the source files) nor docker run will rebuild anything. OP has to check whether he is using multi-arch images and, if not, change them. As for the actual data in containers, it varies from app to app - I believe ARM and x86 can differ in byte order, so for apps that don't store data in a platform-agnostic format that might be a problem.
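    To illustrate the byte-order point, here's a minimal Python sketch (my own illustration, not from any particular app): the same integer serializes to different bytes depending on endianness, so data written in native byte order on one machine may silently decode to the wrong value on a machine with the opposite byte order.

    ```python
    import struct

    value = 42

    # Explicit little-endian and big-endian encodings of the same 32-bit int.
    le = struct.pack('<I', value)  # b'*\x00\x00\x00'
    be = struct.pack('>I', value)  # b'\x00\x00\x00*'

    # The raw bytes differ, so a reader must know (or pin down) the byte order.
    assert le != be

    # Reading little-endian bytes as if they were big-endian gives a wrong
    # value with no error raised - the classic cross-arch data corruption.
    wrong = struct.unpack('>I', le)[0]
    print(wrong)  # 704643072, not 42
    ```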

    • SnowyLocksmith@alien.topOPB · 11 months ago

      Interesting bit about the byte order. Question though: I have a disk formatted with ext4, and the files on it are perfectly accessible from both an ARM device and an x86 device. So why wouldn't the same apply to Docker config data?

      • -myxal@alien.topB · 11 months ago

        A filesystem has a standardised on-disk format, as it’s used as a medium of exchange between different systems, which might use not just different CPU architectures, but different implementations of the filesystem.

        Whether a random software developer has put in the same effort for their config/data format is anyone's guess. It will probably work in most cases, since config files are usually just text, but as soon as you venture into binary formats that aren't just standard compression (zip, etc.) you'd need to check whether what they're actually doing is CPU-arch agnostic.

        • SnowyLocksmith@alien.topOPB · 11 months ago

          I have never really built an app, so I don't really know, but most of the Docker containers I have used are built on some kind of Linux base image. So then, since the config data is mounted as a volume, shouldn't its format be decided by the Linux image, i.e. shouldn't it be more or less standard? Mostly the developer builds an app in some language, and those are CPU agnostic.

          • -myxal@alien.topB · 11 months ago

            So then, since the config data is mounted as a volume, should its format be decided by the linux image, i.e. it should be more or less standard, right?

            The volume mechanism in Docker is nothing more than a way of redirecting part of the container's filesystem to a directory on the host OS - not that dissimilar from networked file sharing. It has no bearing on what's in the saved files.

            The format of the config/data is determined by the app developer, who chooses how the data gets written from the app's memory to a file on disk. If they write their data through libraries, using formats designed for CPU portability (Unicode text, SQLite DB, zip archive, etc.), then the data will be usable by the same app running under a different CPU arch. But if they use non-portable formats, roll their own format, or just serialize objects straight from memory, those typically won't open/deserialize correctly without extra effort on the developer's part.
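            As a concrete sketch of that distinction (my own illustration, with made-up config values): a config written as JSON text round-trips identically on any CPU, while data packed with native sizes and byte order (Python's `struct` `'@'` prefix) depends on the machine that wrote it.

            ```python
            import json
            import struct

            config = {"port": 8080, "name": "myapp"}

            # Portable: Unicode text (JSON). The bytes on disk are fully
            # defined by the format, not by the CPU that wrote them.
            text = json.dumps(config)
            assert json.loads(text) == config

            # Non-portable: native sizes and alignment ('@' prefix).
            # A C long is 4 bytes on a 32-bit ARM OS but 8 on x86-64, so the
            # same code can produce differently sized records on each machine.
            native_long = struct.calcsize('@l')  # platform-dependent: 4 or 8

            # Portable alternative: pin down byte order and sizes explicitly.
            assert struct.calcsize('<l') == 4  # always 4 bytes, little-endian
            assert struct.calcsize('<q') == 8  # always 8 bytes
            ```

            This is why SQLite and zip files move between machines cleanly: their on-disk byte layout is specified by the format, the same way ext4's is.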

            In practice, IMHO, it comes down to what kind of apps you're using. Most software developed in the last 10 years or so that isn't high-performance or custom code will default to CPU-arch-portable formats.