As suggested in this thread, to a general "yeah, sounds cool". Let's see if this goes anywhere.

Original inspiration:

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

If your sneer seems higher quality than you thought, feel free to make it a post, there’s no quota here

  • saucerwizard@awful.systems · 5 months ago

    I grabbed a book on the Fermi paradox from the university library and it turned out to be full of Bostrom and Sandberg x-risk stuff. I can’t even enjoy nerd things anymore.

    • self@awful.systems · 5 months ago

      it’s the actual fucking worst when the topics you’re researching get popular in TESCREAL circles, because all of the accessible sources past that point have a chance of being cult nonsense that wastes your time

      I’ve been designing some hardware that speaks lambda calculus as a hobby project, and it’s frustrating when a lot of the research I’m reading for this is either thinly-veiled cult shit, a grift for grant dollars, or (most often) both. I’ve had to develop a mental filter to stop wasting my time on nonsensical sources:

      • do they make weird claims about Kolmogorov complexity? if so, they’ve been ingesting Ilya’s nonsense about LLMs being Kolmogorov complexity reducers and they’re trying to use a low Kolmogorov complexity lambda calculus representation to implement their machine god. discard this source.
      • do they cite a bunch of AI researchers, either modern or pre-winter? lambda calculus, lisp, and functional programming in general have a long history of being treated as the magic that’ll enable the machine god by AI researchers, and this is the exact low quality shit research that led to the AI winter in the first place. discard this source.
      • at any point do they casually claim that the Church-Turing correspondence has been disproven or that a lambda calculus machine is superturing? throw that crank shit in the trash where it belongs.

      I think the worst part is having to emphasize that I’m not with these cult assholes when I occasionally talk about my hobby work — I’m not in it to make the revolutionary machine that’ll destroy the Turing orthodoxy or implement anyone’s machine god. what I’m making most likely won’t even be efficient for basic algorithms. the reason why I’m drawn to this work is because it’s fun to implement a machine whose language is a representation of pure math (that can easily be built up into an ML-like assembly language with not much tooling), and I really like how that representation lends itself to an HDL implementation.
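
      For anyone curious what “speaks lambda calculus” actually means in practice, here’s a toy software sketch (in Haskell, just because it’s compact; illustrative only, nothing like the actual HDL, and the names are mine): untyped terms in de Bruijn form with capture-avoiding substitution and normal-order beta reduction, which is the operation the hardware would perform in logic instead of in software.

      ```haskell
      -- Toy model of the reduction a lambda calculus machine has to implement.
      -- Terms use de Bruijn indices so substitution is purely mechanical.
      data Term
        = Var Int          -- de Bruijn index
        | Lam Term         -- abstraction
        | App Term Term    -- application
        deriving (Show)

      -- shift free variables >= cutoff c by d (needed for capture avoidance)
      shift :: Int -> Int -> Term -> Term
      shift d c (Var k)   = Var (if k >= c then k + d else k)
      shift d c (Lam t)   = Lam (shift d (c + 1) t)
      shift d c (App f a) = App (shift d c f) (shift d c a)

      -- substitute term s for variable j in t
      subst :: Int -> Term -> Term -> Term
      subst j s (Var k)   = if k == j then s else Var k
      subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
      subst j s (App f a) = App (subst j s f) (subst j s a)

      -- one normal-order (leftmost-outermost) reduction step, if a redex exists
      step :: Term -> Maybe Term
      step (App (Lam body) arg) = Just (shift (-1) 0 (subst 0 (shift 1 0 arg) body))
      step (App f a) = case step f of
        Just f' -> Just (App f' a)
        Nothing -> App f <$> step a
      step (Lam t) = Lam <$> step t
      step (Var _) = Nothing

      -- keep stepping until no redex remains (may diverge, just like the hardware)
      normalize :: Term -> Term
      normalize t = maybe t normalize (step t)

      -- (\x. \y. x) applied to two arguments returns the first one
      main :: IO ()
      main = print (normalize (App (App (Lam (Lam (Var 1))) (Lam (Var 0))) (Lam (Lam (Var 0)))))
      ```

      A real machine would presumably do this with explicit sharing rather than naive term copying, but the reduction relation it implements is the same one sketched here.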

      • froztbyte@awful.systems · 5 months ago

        doing work that’s not trying to free us from the tyranny of century-old mathematical formulations? how dare you! burn the witch!

        (/s, of course! also your hardware calculus project sounds like a nicer time than my batshit idea (I want to make a fluidic processor… someday…))

        • self@awful.systems · 5 months ago

          I want to make a fluidic processor… someday…

          fuck yeah! this sounds like the kind of thing that’d be incredibly beautiful if done on the macro scale (if that’s possible) — I love computational art projects that clearly show their mechanism of action. it’s unfortunate that a majority of hardware designers have a “what’s the point of this, it’s not generating value for shareholders” attitude, because that’s the point! I will make a uniquely beautiful computing machine and it won’t have any residual value any capitalist assholes can extract other than the beauty!

          if I ever finish this thing, I should make a coprocessor that can trace its closure lists live as it reduces lambda calculus terms and render them as fractal art to a screen. I think that’d be fun to watch.

          • froztbyte@awful.systems · 5 months ago

            Yep. I love beautiful machines with beautiful action in the same way.

            One of my favourites I’ve seen was a clock with a tilt table, switchback tracks running widthwise across that table, and switches at the track ends. A small ball would run along the track for 60s until it hit the switch, which would cause a lever system to flip the orientation of the tilt table (starting the ball moving the other way).

            Saw it in the one London collection of typically-stolen antiquities; I don’t recall its origin.

            For the processor: yep, something larger is the intent, but I think I’d have to start with a model scale first just to suss out some practical problems. And then when scaling it, other problems. God knows if I’d want to make this “you can walk in it” scale, but I’ll see 😅

      • Deborah@hachyderm.io · 5 months ago

        It’s like the filter I have to add to any research about anything with an overlap on healthy food (90% of it is new age grifters, manosphere or wooanon, or fatphobia & ableism); gardening, especially native plant gardening (a substantial minority is the intersection of woo-woo and NIMBYs); or martial arts (either manosphere or wooanon, depending on the gender breakdown of the martial art). So much crank.

        • self@awful.systems · 5 months ago

          oh absolutely! I get too much exposure to the crank side of all of those topics from my family, so I can definitely relate. now I’m flashing back to the last couple of times my mom learned the artificial sweetener I use is killing me (from the same discredited source every time; they make the “discovery” that a new artificial sweetener causes cancer every few years) and came over specifically to try and convince me to throw out the whole bag

            • self@awful.systems · 5 months ago

              that too! processed sugar was the devil too, as if granulating cane sugar imbued it with the essence of evil. she also claimed they used bleach to make white refined sugar? I think the end goal was to get me to reject the idea of flavor. joke’s on that lady, my cooking is both much better than hers and absolutely terrible for you

      • blakestacey@awful.systems · 5 months ago

        Oh boy, I have thoughts about Kolmogorov complexity. I might actually write a section in my textbook-in-progress to explain why it can’t do what LessWrongers want it to.

        A silly thought I had the other day: If you allow your Universal Turing Machine to have enough states, you could totally set it up so that if the first symbol it reads is “0”, it outputs the full text of The Master and Margarita in Unicode, whereas if it reads “1”, it goes on to read the tuples specifying another TM and operates as usual. More generally, you could take any 2^N - 1 arbitrarily long strings, assign each one an N-bit abbreviation, and have the UTM spit out the string with the given abbreviation if the first N bits on the tape are not all zeros.
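
        To make the trick concrete, here’s a toy Haskell sketch (purely illustrative; the lookup table and the fallback simulator are stand-ins, not a real UTM): a “reference machine” that hard-codes 2^N - 1 favourite strings behind N-bit abbreviations and otherwise behaves like an ordinary universal machine.

        ```haskell
        import qualified Data.Map as Map

        type Bits = [Bool]

        -- all bit strings of length k
        allBits :: Int -> [Bits]
        allBits 0 = [[]]
        allBits k = [ b : rest | b <- [False, True], rest <- allBits (k - 1) ]

        -- the abbreviation table: every non-zero N-bit prefix names one
        -- arbitrarily long pre-chosen string (placeholders here; one of them
        -- could just as well be the full text of The Master and Margarita)
        favourites :: Int -> Map.Map Bits String
        favourites n = Map.fromList
          [ (prefix, "pre-chosen string #" ++ show i)
          | (i, prefix) <- zip [1 :: Int ..] (filter (any id) (allBits n)) ]

        -- stand-in for the usual business of reading tuples and simulating a TM
        ordinaryUTM :: Bits -> String
        ordinaryUTM _ = "(simulate the encoded Turing machine as usual)"

        -- the rigged "universal machine": dispatch on the first n bits of the tape
        riggedUTM :: Int -> Bits -> String
        riggedUTM n tape =
          case Map.lookup (take n tape) (favourites n) of
            Just s  -> s                          -- non-zero prefix: emit the stored string
            Nothing -> ordinaryUTM (drop n tape)  -- all-zero prefix: behave normally

        main :: IO ()
        main = do
          putStrLn (riggedUTM 3 [False, False, True])            -- a 3-bit "program" for a huge string
          putStrLn (riggedUTM 3 (replicate 3 False ++ [True]))   -- falls through to the normal simulation
        ```

        Relative to this machine each favourite string has an N-bit description, which is the point: the invariance theorem only pins Kolmogorov complexity down up to an additive constant, and that constant is roughly the size of whatever lookup table you were willing to smuggle into your choice of reference machine.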

        • froztbyte@awful.systems · 5 months ago

          You could use the recent-ish Junferno video about Turing machines to demonstrate that point as well

  • Steve@awful.systems · 5 months ago

    Last week I found this article about AI bullshit, written in 1985 by Tom Athanasiou and published in the (also new to me) Processed World zine.

    The world of artificial intelligence can be divided up a lot of different ways, but the most obvious split is between researchers interested in being god and researchers interested in being rich. The members of the first group, the AI “scientists,” lend the discipline its special charm. They want to study intelligence, both human and “pure,” by simulating it on machines. But it’s the ethos of the second group, the “engineers,” that dominates today’s AI establishment. It’s their accomplishments that have allowed AI to shed its reputation as a “scientific con game” (Business Week) and to become, as it was recently described in Fortune magazine, the “biggest technology craze since genetic engineering.”

    The engineers like to bask in the reflected glory of the AI scientists, but they tend to be practical men, well-schooled in the priorities of economic society. They too worship at the church of machine intelligence, but only on Sundays. During the week, they work the rich lodes of “expert systems” technology, building systems without claims to consciousness, but able to simulate human skills in economically significant, knowledge-based occupations (The AI market is now expected to reach $2.8 billion by 1990. AI stocks are growing at an annual rate of 30%).

    https://processedworld.com/Issues/issue13/i13mindgames.htm

    All Processed World issues on archive.org https://archive.org/search?query=creator%3A"Processed+World+Collective"&and[]=mediatype%3A"texts"

    and the official Processed World site with html archive https://processedworld.com

  • hrrrngh@awful.systems · 5 months ago

    Unsure if this meets even the lowest bar for this thread but I was jumpscared by Aella while browsing Reddit

    • bitofhope@awful.systems · 5 months ago

      Jesus wept, that one deserves a thread of its own. I can’t remember the last time I winced this hard.

    • Soyweiser@awful.systems · 5 months ago

      Raise the Sanity Waterline

      We all wish our friends would be more rational, especially when they disagree with us. But actually helping them can be difficult, especially when already in an argument. Rationality Cardinality will help you teach your friends how to think more clearly, by introducing them to concepts in a fun and memorable way.

      Well, it will make your friends more Rationalist but not in the way they hope.

    • sc_griffith@awful.systems · 5 months ago

      imagine someone pulls this out and you have no idea what it is. you’re kind of nervous and weirded out by the energy at this orgy but at least this will distract you. you look at your first card and it has a yudkowsky quote on it

      • self@awful.systems · 5 months ago

        one of the data fluffers solemnly logs me as “did not finish” as I flee the orgy

    • earthquake@lemm.ee · 5 months ago

      Incredible, they just use the limerick that appears in the “Exaggeration and distortion of mental changes” section of Phineas Gage’s Wikipedia article, uncritically.

  • Mii@awful.systems · 5 months ago

    My first mini sneer. Hope it fits the criteria.

    I frequent some (very AI-critical) art spaces, and every now and then we get some trolls who act like literal anime villains, complete with evil plans and revenge plots, but unfortunately without cool villain laughs.

    I always wonder if those bozos all were stuffed into a trashcan by a gang of delinquent artists in high school, judging from the absolute hate-boner they seem to have.

    > I’ve deliberately not been talking about it online to aid in keeping it from their knowledge as long as possible.

    Not sure if he knows that not all artists live in caves and make cave paintings. And even those who do probably have a smart phone with them, for better or worse. So I’m afraid his nefarious plan doesn’t quite work out.

    > I knew it would scare the anti-AI shitless, because it completely bypasses scraping, datasets […].

    Shaking in my chair over here, but I still don’t understand how this negates the need for scraping and datasets. Just because I can attach a reference image to my prompt doesn’t mean the waifu generator can suddenly operate without training data.

    > I foresee a full-on tantrum when this becomes commonly known.

    I mean, it’s not like Midjourney put out a big-ass announcement for that feature or anything. It’s totally a secret that only an elite circle knows about.

    • self@awful.systems · 5 months ago

      this is just an increasingly desperate Seto Kaiba taking to the internet because yu-gi-boy pointed out his AI-generated Duel Monsters deck does not have the heart of the cards, mostly because the LLM doesn’t understand probability, but he’s in too deep with the Kaiba Corp board to admit it

  • slopjockey@awful.systems · 5 months ago

    I just found out that there are Dominican Republic supremacists? Like, the latest thing on Xitter is making the DR out to be Caucasian Haiti. It’s some especially pol-brained nonsense about how the DR is successful because it’s a white country, even though they’re all very clearly AT LEAST lightskinned? It’s an argument about a country that only works if you’ve never seen the country or its people.

    This one dude in particular namesearches Haiti and spams the replies with as many white(ish) Dominicans as he can, along with the typical rants and graphs, and then if he gets dunked on in the quotes or the replies he just ritually posts the same 5 TikToks of the same lightskinned Dominicans and says that all the darkskins are just Haitians. The worst part is that it works. His posting completely smothers any tweet disagreeing because he’s paying Elon 58 Dominican pesos a month to LARP as Measurehead on pay-to-win 4chan.

      • Deborah@hachyderm.io · 5 months ago

        Literally paid the price. In the sense that France charged Haitians cash for their own bodies as formerly enslaved people, the US supported France in this, Citibank eventually bought the debt, and Haiti spent 122 years buying themselves free. The most obvious case for reparations one could make and France DGAF and neither does Citi.

        (I assume the podcast goes into all of this but clarifying for anyone else who does a drive-by. Usually “paid the price” is a metaphor. Not here.)

    • David Gerard@awful.systemsOPM · 5 months ago

      This feels shorter than these things usually are, or I’ve just become inured to this stuff.

      Of course it turned out to be about not taking AI doom seriously, and not about, say, EA collectively falling for a scammer.

    • swlabr@awful.systems · 5 months ago

      Monologue, 10 second read: “yeah dawg so extrapolating from data seems intuitive, but data alone is not enough to make accurate or convincing predictions.”

    • titotal@awful.systems · 5 months ago

      Good to see the Yud tradition of ridiculous strawmanning of science continue.

      In this case, the strawscientist falls for a ponzi scheme because “it always outputted the same returns”. So scientific!

      • blakestacey@awful.systems · 5 months ago

        Ponzi Schemer: “Ignore all these elaborate, abstract, theoretical predictions. Empirically, everyone who’s invested in Bernie Bankman has received back 144% of what they invested two years later.”

        LessWronger: “Your object-level error is that you have committed the trend projection fallacy instead of using the universal prior and Jaynes-Solomonoff inversion, as HPMoR explained using the analogy of the inter-magic-national goblin banking system…”

        Scientist: “I fucked your mom”

    • elmtonic@lemmy.world · 5 months ago

      #3 is “Write with AI: The leading paid newsletter on how to turn ChatGPT and other AI platforms into your own personal Digital Writing Assistant.”

      and #12 is “RichardGage911: timely & crucial explosive 9/11 WTC evidence & educational info”

      Congratulations to Aella for reaching the top of the bottom. Also random side thought, why do guys still simp in her replies? Why didn’t they just sign up for her birthday gangbang?

    • blakestacey@awful.systems · 5 months ago

      In February of 2021 the far-right social media platform Gab experienced a data breach resulting in the exposure of more than 70 gigabytes of Gab data, including user registration emails and hashed passwords. Like many of those on the far-right, Red Panels had a presence on Gab, so we consulted the now-public data set from the Gab exposure. We learned that the “@redpanels” account had been registered with the email hgraebener@*****.com.

      womp womp

      • gerikson@awful.systems · 5 months ago

        Graebener was part of an Open iT delegation to Japan in May 2019 and appeared in photos of this on the Open iT LinkedIn page. […]. During the same time, StoneToss was eager to let his fans know that he had arrived in Japan, writing on Twitter, “Finally made it to the ethnostate, fellas.”

        Oh, so that’s why Japan is so damn popular on HN.