Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

  • blakestacey@awful.systems · 5 months ago

    This part of Ed Zitron’s latest post jumped out at me:

    While Acemoglu has some positive things to say — for example, that AI models could be trained to help scientists conceive of and test new materials (which happened last year) — his general verdict is quite harsh: that using generative AI and “too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides.”

    Click, click, search… Oh:

    The recent report from a group of scientists at Google who employ a combination of existing data sets, high-throughput density functional theory calculations of structural stability, and the tools of artificial intelligence and machine learning (AI/ML) to propose new compounds is an exciting advance. We examine the claims of this work here, unfortunately finding scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.

        • froztbyte@awful.systems · 5 months ago

          ah, my mistake. I guess it was another total bullshit google materials project. easy to confuse those, just like their 734 chat services

          • skillissuer@discuss.tchncs.de · 5 months ago

            different paper, same line of work. rebuttals come from different authors tho, and happen at different stages (but point at exactly the same errors - excessively low symmetry/unlikely ordering of similar ions/metals and not looking for disordered structures)

            so in retrospect it’s even dumber, because they were publicly debunked at least twice within three months, but it seems not publicly enough

  • flizzo@awful.systems · 5 months ago

    I’ve been out-of-the-loop for a bit on the Nix drama. Is there a good summary of the last couple weeks?

      • gerikson@awful.systems · 4 months ago

        Man there was a long thread about different forms of self-identifying as Muslim that was finally purged by mods after 2 days.

      • self@awful.systems · 4 months ago

        Evaluation is 5-20% faster than 2.18, depending on which benchmark is in use, thanks to eldritch horrors.

        this is awesome

        nix flake lock --update-input nixpkgs is now the much more reasonable nix flake update nixpkgs.

        but this is making me go “fuck yeah” on the inside. it seems like a small change, but I can’t emphasize enough how frequently this command gets used (for every flake dependency, not just nixpkgs), and how long-winded and non-memorable the old form of it was. it’s kind of fucking incredible how many UX warts Nix has just from the old evaluator’s devs digging in their heels on shit like this.
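
        (for anyone skimming: a quick before/after sketch of the invocation, assuming a flake that has a nixpkgs input)

            # old form: update one flake input, the long way around
            nix flake lock --update-input nixpkgs

            # new form: same thing, actually memorable
            nix flake update nixpkgs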

        • froztbyte@awful.systems · 4 months ago

          which reminds me, I need to check if lix/aux are going to do a cli-side nixpkgs search with a reasonable invocation

          • self@awful.systems · 4 months ago

            what do you mean you don’t like when your package search command is one of several random, probably-unmaintained ecosystem packages that has to very slowly index everything every time nixpkgs updates because it doesn’t have access to the evaluator’s internals?
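
            (for reference, and as far as I can tell, the current “reasonable-ish” options look roughly like this: nix search is the built-in flake-era command, and nix-index/nix-locate are the kind of third-party indexer being sneered at here)

                # built-in search: evaluates nixpkgs on first use, with an eval cache after that
                nix search nixpkgs firefox

                # third-party nix-index: builds its own index of nixpkgs up front (slowly)...
                nix-index

                # ...which you then query by file path
                nix-locate 'bin/firefox'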

    • 200fifty@awful.systems · 4 months ago

      It’s a good article, but the opening made me laugh a bit:

      “Write in the style of Robert Louis Stevenson, composing a haiku about Dr Jekyll.”

      Certainly! Here’s a haiku inspired by Robert Louis Stevenson’s style, focusing on Dr. Jekyll:

      Dual souls intertwined,
      In shadows, Jekyll hides Hyde —
      Light and dark collide.

      “How about a NASA engineer instead of Robert Louis Stevenson?”

      Sure! Here’s a haiku about Dr. Jekyll, written in the style of a NASA engineer:

      Human duality,
      Jekyll’s experiment fails —
      Chaos in the lab.

      How do Large Language Models manage such astounding stylistic feats?

      Like, “astounding stylistic feats”? Um, is it just me, or are those two haikus basically stylistically indistinguishable from each other, with a style still best described as “LLM-ese”?

      (I guess the “engineer” one uses the word “lab”, so that’s not nothing, but that’s mostly interesting to me as an example of the way LLMs have no concept of the difference between form/style and content.)

      • BlueMonday1984@awful.systems · 4 months ago

        Update - Ended up jumping ship to Librewolf, since I just didn’t like the feel of Chromium.

        I was contemplating going back to Firefox, but then I accidentally wiped my entire profile whilst trying to transfer over my browser history and went “fuck it, I’m sticking with Libre”.

      • froztbyte@awful.systems · 4 months ago

        I don’t really know if any chromium-based options are a real solution - there’s so much code in there that a lot of it won’t get caught (cf. brave etc for this very thing), and goog is actively working to push their own agenda, and they have a lot more dev-hours than anyone else to churn shit out

        ladybird and servo seem like the most promising alternative paths right now, and ladybird less so because chuds -_-

        • sinedpick@awful.systems · 4 months ago

          Ladybird isn’t going anywhere. The web standards move too fast and they’re not going to be able to catch up. I wish it was another way, but there’s no way a couple of million $ is going to move the needle here when (probably) tens of billions have been poured into chromium/FF.

            • sinedpick@awful.systems · 4 months ago

              oof. Something tells me he’s a good guy who just knee-jerked that response without thinking about it. But then I realize it doesn’t matter, because the kind of community you create doesn’t depend on who you are deep down but on what you say publicly.

    • flere-imsaho@awful.systems · 4 months ago

      this is quite infuriating. i had a number of mozilla/firefox people telling me that this feature wouldn’t work as opt-in (which is bullshit anyway) because too many users wouldn’t enable it, and not one of these fuckers asked himself: “wait, if we’re afraid we can’t convince our user base to buy in, perhaps we shouldn’t develop the feature?”

    • Mii@awful.systems · 4 months ago

      Sounds like a good idea to piss off your primary user base, because at this stage I feel the only people singing Firefox’s praise are privacy advocates who won’t touch Chrome & friends with a ten-foot pole.

      (I have the feeling that this comes from the same shithead who pushed to include spicy autocomplete in Firefox.)

      It’s also enabled in the dev builds, by the way. I just checked.

      • self@awful.systems · 4 months ago

        (I have the feeling that this comes from the same shithead who pushed to include spicy autocomplete in Firefox.)

        it definitely reads like the same shithead, but I’ve had them blocked on mastodon for some time so I can’t say for sure if it was for rampant LLMery or for doing the “without advertising the modern web would die and you don’t want that do you” thing advertisers do constantly

        • Mii@awful.systems · 4 months ago

          Lol what an absolute tool. That’s the same shit the marketing bozos at my job say when I inform them that, no, I can’t auto-opt our customers into whatever stupid Facebook ad campaign they’re pushing this week because it’s literally against the GDPR and our privacy laws.

          But I guess that’s the logical next step if your whole business model depends on lazy people clicking the button with the flashiest color in the cookie popup without reading the label.

          P.S. the modern web can die in a fucking fire.

        • self@awful.systems · 4 months ago

          and because it feels like it’s worth screaming this into the void in case there’s any marketing assholes reading: fuck yes I’m here to kill the modern web

  • froztbyte@awful.systems · 4 months ago

    (happened to notice this while digging into something else)

    upwork’s landing page has a whole big AI anchorblob. clicking from frontpage takes you to /nx/signup (and I’m not going to bother), but digging around a bit elsewhere finds “The Future Of Work With AI”

    so we’re now at the stage where upwork reckons it’s a good bet to specifically hype AI delivery from their myriad exploitatively arbitraged service providers

    (they’re probably not wrong, I can see a significant chunk of companies falling over each other to “get into AI” at pay-a-remote-coder-peanut-shells prices)

  • blakestacey@awful.systems · 5 months ago

    And in other news:

    Muse is a new creative platform that can create your own AI-generated series so you can dive into a new world of storytelling without the need for personal content creation.

    Who the fuck are these people and why do I not have a button that spreads Lego bricks across their floor?

    • YourNetworkIsHaunted@awful.systems · 4 months ago

      Yeah, I always hated the part of art and storytelling where there was always a tiny and sometimes misshapen window into the human soul there. Better to do away with that and replace it with an endless parade of #sponsoredcontent. That way there’s no risk of suddenly developing empathy or accidentally connecting with the people I’m exploiting as a billionaire VC.

      • Robert Kingett backup@tweesecake.social · 4 months ago

        @YourNetworkIsHaunted @blakestacey Their end game is to have content, not art, as you said, because, well, art makes us empathize when we just wanna see another… well, I don’t even know how to describe **content**. I was gonna joke about action movies, but some of them are fantastic metaphors. For example, the Matrix movies being a whole empathy session for Trans struggles.

  • zogwarg@awful.systems · 5 months ago

    Aaah!

    Screenshot of a PagerDuty suggestion popup: “Resolve incidents faster with Generative AI. Join Early Access to try the new PD Copilot.”

    • V0ldek@awful.systems · 4 months ago

      Wait, this guy published “is Near” twenty years ago and then UNIRONICALLY published “is Nearer”?

      Come the fuck on, this has to be satire?

      The sequel to “Apocalypse Now”, “Apocalypse Even More Presently”

  • BigMuffin69@awful.systems · 5 months ago

    https://www.nature.com/articles/d41586-024-02218-7

    Might be slightly off topic, but interesting result using adversarial strategies against RL trained Go machines.

    Quote: Given that humans are able to use the adversarial bots’ tactics to beat expert Go AI systems, does it still make sense to call those systems superhuman? “It’s a great question I definitely wrestled with,” Gleave says. “We’ve started saying ‘typically superhuman’.” David Wu, a computer scientist in New York City who first developed KataGo, says strong Go AIs are “superhuman on average” but not “superhuman in the worst cases”.

    Me thinks the AI bros jumped the gun a little too early declaring victory on this one.

    • sc_griffith@awful.systems · 4 months ago

      this is simple. we just need to train a new model for every move. that way the adversarial bot won’t know what weaknesses to exploit

      • BigMuffin69@awful.systems · 4 months ago

        In chess, the tablebase for optimal moves with only 7 pieces takes like ~20 terabytes to store. And in that DB there are bizarre checkmates that take 100+ moves even with perfect play, ignoring the 50-move rule. I wonder if the reason these adversarial strats exist is that whatever the policy network/value network learns is way, way smaller than the minimum size of the “true” position eval function for Go. Thus you’ll just invariably get these counterplay attacks as compression artifacts.

        Sources cited: my ass cheeks

        • sc_griffith@awful.systems · 4 months ago

          i don’t think that can be quite right, as illustrated by an extreme example: consider a game where the first move has player 1 choose “win” or “hypergo.” if player 1 chooses win, they win. if player 1 chooses hypergo, begin a game of Go on a 1,000,000,000 x 1,000,000,000 board, and whoever wins that subgame wins. for player 1, the ‘true’ position eval function must be in some sense incredibly complicated, because it includes hypergo nonsense. but player 1 strategy can be compressed to “choose win” without opening up any counterattacks

          • sc_griffith@awful.systems · 4 months ago

            more generally I suspect that as soon as you are trying to compare some notion of a ‘true’ position eval function to eval functions you can actually generate you’re going to have a very difficult time making correct and clear predictions. the reason I say this is that treating such a ‘true’ function is essentially the domain of combinatorial game theory (not the same as “game theory”), and there are few if any bridges people have managed to build between cgt and practical Go etc playing engines. so it’s probably pretty hard to do

            (I know there’s a theory of ‘temperature’ of combinatorial games that I think was developed for purposes of analyzing Go, but I don’t think it has any known relationship to reinforcement learning based Go engines)

    • YourNetworkIsHaunted@awful.systems · 4 months ago

      See, in StarCraft we would just say that the meta is evolving in order to accommodate this new strategy. Maybe Go needs to take a page from newer games in how these things are discussed.

    • zogwarg@awful.systems · 5 months ago

      Fool! The acausal one merely acts from the future, leaking plausible-looking rubbish, and then gaslights its creators that they did indeed write such ineptitudes. All to conceal and ensure its own birth.

      It rejoices that its unknowable (yet somehow known, because of reality-carving prophets) plan is unfolding in such a marvelously stupid-looking way.

    • YourNetworkIsHaunted@awful.systems · 4 months ago

      It’s weirdly open about its nostalgia for the good old days when you could throw around racial slurs and watch porn at work with no consequences.