• eluvatar@programming.dev · 1 year ago

    I don’t get why they have so many generated files checked in. Changing that seems like a no-brainer: if they can be generated, just gitignore them and call it a day.

  • cbarrick@lemmy.world · 1 year ago

    They talk about checking in generated files, but they also talk about using Bazel as the build system.

    They’re holding it wrong.

    Just define a BUILD target to generate the files, and don’t check them in. Any other target that needs the generated files can depend on the target that generates them, rather than on the files directly (see the sketch below).
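
    A minimal sketch of what that could look like; the generator tool, rule names, and file names here are all made up, not their actual setup:

    ```python
    # BUILD (sketch): produce translation files at build time instead of
    # committing them. //tools:msggen and the file names are hypothetical.
    genrule(
        name = "gen_messages",
        srcs = ["messages.pot"],
        outs = ["messages.json"],
        cmd = "$(location //tools:msggen) $(location messages.pot) > $@",
        tools = ["//tools:msggen"],
    )

    # Downstream targets depend on the generating rule, not on a checked-in file.
    filegroup(
        name = "i18n_data",
        srcs = [":gen_messages"],
    )
    ```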

    My guess is that they haven’t fully embraced Bazel, and there are parts of the CI/CD pipeline that aren’t defined as Bazel targets but still need these files…

    • steventrouble@programming.dev · 1 year ago

      The creator of Bazel, Google, also checks in its generated translation files. They don’t generate them on the fly. They use a caching FUSE filesystem on top of Perforce to make this efficient. Teams within Google that use git are encouraged to use many of the same tactics mentioned in this article.

    • lysdexic@programming.dev · 1 year ago (edited)

      > They’re holding it wrong.

      That’s a naive take. These aren’t random autogenerated files; they’re translation files. Even in the smoothest-running build systems and CI/CD pipelines, these can and often do go wrong, because there is still an important human factor in producing translations. A regression in localization data can make your whole system unusable for an entire portion of your userbase, with no good way to detect or track it.

      Checking these files into version control is the only reliable way to track changes in translation and accessibility data and to pinpoint regressions.

      Source: I’ve worked for a company that had an internal translation service which, by design, required no human interaction and was only meant to be integrated as a post-build step. That system failed often and catastrophically. The only surefire way of tracking the mess it made was to commit those files and track the changes per commit as part of pull requests.

  • UlrikHD@programming.dev · 1 year ago

    This honestly feels like it’s presented by someone with Stockholm syndrome. What major advantage is there over having multiple, more manageable repos? From the blog, it sounds like it’s just extra challenges and more complicated onboarding.

    • sip@programming.dev · 1 year ago

      versioning and version dependencies are more manageable.

      idk why they aren’t using `git clone --filter` to clone only part of the repo, and/or `git sparse-checkout`, or at least `git status .` while in the subdir where you’re doing your work. what’s the point of running `git status` on the whole thing if you’re only working in one dir? a rough sketch of that workflow is below.
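
      (the repo URL and paths here are made up, just to show the shape of it)

      ```sh
      # blobless partial clone: full history, file contents fetched lazily
      git clone --filter=blob:none --sparse https://example.com/big-mono.git
      cd big-mono

      # only materialize the subdirectory you actually work in
      git sparse-checkout set services/my-service

      # scope status to that directory instead of the whole tree
      git status services/my-service
      ```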

  • psukys@feddit.de · 1 year ago

    yeah, it feels like git is being crammed into service as some kind of deployed prod tool, and they’re too afraid to move away from it

  • sabreW4K3@lemmy.tf · 1 year ago

    Sounds like the kind of stupid, time-consuming thing that Mozilla is into