• 0 Posts
  • 54 Comments
Joined 1 year ago
cake
Cake day: June 21st, 2023




  • One issue I have with hexbear is that you can’t argue with its users on hexbear itself. Most comments from outsiders are deleted within a day, and most of the users aren’t interested in discussion; they simply resort to name-calling and personal attacks. The more “sophisticated” ones will tell you to “read theory”. The number of hexbear users actually capable of producing arguments seems to be very low, at least in my experience.

    These issues exist on other instances as well, of course, but on hexbear it’s particularly bad. The only other instances I have interacted with that were this toxic were lemmygrad and exploding-heads.


  • Lemvi@lemmy.sdf.org to Memes@lemmy.ml: Know the difference. (edited, 2 months ago)

    No, I just have very different ideas of what progress is.

    Progress in my eyes is made when a society becomes more democratic, and when we solve conflicts without bloodshed.

    In that sense, sure, the GDR was a step in the right direction, but Nazi Germany didn’t exactly set the bar very high.

    The idea of socialism is nice, but you hardly have any progress if the system (be it built on free markets or planned economies) doesn’t work to improve ordinary citizens’ lives, but only to keep the powerful in power.

    Personally, I don’t care much about free markets or planned economies. I think the best approach, as so often, is a kind of blend: a social market economy that allows independent companies within a framework that protects workers, consumers and the environment.

    Thing is, the specifics of the economic system aren’t important. What matters is that the people are the ones who decide them.

    There is nothing wrong with pursuing a utopian society, but ultimately you have no control over what happens in the far future (nor should you; future societies need to be ruled by future people).

    The only thing you can control is the present and the near future, so what really matters aren’t the ends you strive for, but the means you employ while doing so.


  • Ah yes, my grandparents, the landlords. Wait hol’ up, they were working people, not landlords. GDR fucked them regardless.

    “bUt tHAT wASn’T rEaL ComMunIsM” If neither the USSR nor China could achieve true Communism, then maybe it isn’t so much a realistic goal as a utopian ideal, a convenient justification for all kinds of crimes against humanity that occur in its pursuit.




  • I don’t think he is proposing another dimension, but rather another scale. As you already said, we already filter the information that reaches us.

    He seems to take this idea of filtering/censorship to an extreme. Where I see filtering mostly as a matter of convenience, he portrays information as a threat that people need to be protected from. He implies that being presented with information that challenges your world view is something bad, and I disagree with that.

    I am not saying that filtering is bad. I too have blocked some communities here on Lemmy. I am saying that it is important not to put yourself in a bubble, where every opinion you see is one you agree with, and every news article confirms your beliefs.




  • Lemvi@lemmy.sdf.org to Chat@beehaw.org: What does allyship mean? (3 months ago)

    I’d say an ally is someone you have an alliance with, so someone with whom you have agreed to pursue a common goal. So yeah, I’d say if you are someone’s ally, they are also yours.

    That differs somewhat from how it’s used in the LGBT+ community, where it refers to non-LGBT+ supporters of LGBT+ rights.





  • Don’t try to point out why they are wrong; instead, explain why you believe what you believe. And when they tell you why they believe what they do, no matter how ridiculous it might seem, respect their opinion and explain which points exactly you disagree with and why.

    I think the idea of trying to convince the other is flawed in itself. It implies that you are right and they are wrong. Approach any conversation with that mindset, and neither their nor your opinion will change.

    Instead, try to see a discussion as a way to exchange perspectives with the goal of finding the truth. Only if you are open to the idea of changing your mind can you hope to change that of your conversation partner.



  • Ok, maybe I didn’t make my point clear: Yes, they can produce a text in which they reason. However, that reasoning mimics the reasoning found in the training data. The arguments an LLM makes and the stance it takes will always reflect its training data. It cannot reason counter to that.

    Train an LLM on a bunch of English documents and it will suggest nuking Russia. Train it on a bunch of Russian documents and it will suggest nuking the West. In both cases it has learned to “reason”, but it can only reason within the framework it has learned.

    Now if you want to find a solution for world peace, I’m not saying that AI can’t do that. I am saying that LLMs can’t. They don’t solve problems, they model language.
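That framework-dependence can be illustrated with a deliberately tiny stand-in for an LLM: a bigram counter trained on two invented corpora (all sentences below are hypothetical, chosen only for illustration). Each toy “model” can only continue a prompt with words it saw in its own training data:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which token follows each token in the corpus
    (a toy stand-in for a real language model)."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            follows[prev][nxt] += 1
    return follows

def most_likely_next(model, word):
    """Greedy decoding: pick the most frequent continuation."""
    return model[word].most_common(1)[0][0]

# Two hypothetical corpora with opposite framings of the same subject.
corpus_a = ["the rival is dangerous", "the rival is dangerous", "the rival is hostile"]
corpus_b = ["the rival is peaceful", "the rival is peaceful", "the rival is friendly"]

model_a = train_bigrams(corpus_a)
model_b = train_bigrams(corpus_b)

# Identical prompt, opposite continuations: each model "reasons"
# only within the distribution it was trained on.
print(most_likely_next(model_a, "is"))  # → dangerous
print(most_likely_next(model_b, "is"))  # → peaceful
```

A real LLM is vastly more sophisticated, but the dependence on the training distribution is the same in kind.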


  • LLMs are trained to see parts of a document and reproduce the other parts, that’s why they are called “language models”.

    For example, they might learn that the words “strawberries are” are often followed by the words “delicious”, “red”, or “fruits”, but never by the words “airplanes”, “bottles” or “are”.

    Likewise, they learn to mimic the reasoning contained in their training data. They learn the words and structures involved in an argument, but they also learn the conclusions they should arrive at. If the training dataset consists of 80 documents arguing for something and 20 arguing against it (assuming nothing else, like length, differentiates those documents), the LLM will adopt the standpoint of the 80 documents and argue for that thing. If those 80 documents contain flawed logic, so will the LLM’s reasoning.

    Of course, you could train an LLM on a carefully curated selection of only documents without any logical fallacies. Perhaps such a model might be capable of actual logical reasoning (though it would still be biased by the conclusions contained in the training dataset).

    But to train an LLM you need vast amounts of data. Filtering out documents containing flawed logic not only requires a lot of effort, it also reduces the size of the training dataset.

    Of course, that is exactly what the big companies are currently researching, and I am confident that LLMs will only get better over time. Still, the LLMs of today are trained on large datasets rather than perfect ones, and their architecture and training prioritize language modelling, not logical reasoning.
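The “strawberries are” example can be made concrete with a few lines of counting (a toy bigram model over an invented mini-corpus; real LLMs learn these statistics with neural networks over vast datasets, but the intuition is the same):

```python
from collections import Counter

# Invented mini-corpus for illustration.
sentences = [
    "strawberries are red",
    "strawberries are red",
    "strawberries are delicious",
    "strawberries are fruits",
]

# Count what follows the context "strawberries are".
continuations = Counter(s.split()[2] for s in sentences)

# The model's probability estimate for each next word is just its
# relative frequency; unseen words like "airplanes" get zero.
total = sum(continuations.values())
probs = {word: count / total for word, count in continuations.items()}
print(probs)                        # {'red': 0.5, 'delicious': 0.25, 'fruits': 0.25}
print(probs.get("airplanes", 0.0))  # → 0.0
```

Nothing here involves the model knowing what a strawberry is; it only knows which words tend to follow which.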


  • It should be mentioned that those are language models trained on all kinds of text, not military specialists. They string together sentences that are plausible based on the input they get, they do not reason. These models mirror the opinions most commonly found in their training datasets. The issue is not that AI wants war, but rather that humans do, or at least the majority of the training dataset’s authors do.