• 1 Post
  • 63 Comments
Joined 11 months ago
Cake day: August 2nd, 2023

  • It can’t be that hard to make a chatbot that can take instructions like “identify any unsafe outcomes from following this advice”

    It certainly seems like it should be easy to do. Try an example. How would you go about defining safe vs unsafe outcomes for knife handling? Since we can’t guess what the user will ask about ahead of time, the definition needs to apply in all situations that involve knives: eating, cooking, wood carving, box cutting, self defense, surgery, juggling, and any number of activities that I may not have thought of yet.

    Since we don’t know who will ask about it, we also need to be correct for every type of user. The instructions should be safe for toddlers, adults, the elderly, knife experts, and people who have never held a knife before. We also need to consider every type of knife: folding knives, serrated knives, sharp knives, dull knives, long, short, etc.

    When we try those sorts of safety rules with humans (e.g. many venues have a sign that instructs people to “be kind” or “don’t be stupid”), they mostly work until we inevitably run into the people who argue about what that means.


  • A bunch of scientific papers are probably better data than a bunch of Reddit posts and it’s still not good enough.

    Consider the task we’re asking the AI to do. If you want a human to be able to correctly answer questions across a wide array of scientific fields, you can’t just hand them all the science papers and expect them to understand it. Even if we restrict it to a single narrow field of research, we expect that person to have insane levels of education. We’re talking 12 years of primary education, 4 years as an undergraduate, and 4 more years doing their PhD, and that’s at the low end. During all that time the human is constantly ingesting data through their senses, and they’re getting constant training in the form of feedback.

    All the scientific papers in the world don’t even come close to an education like that, when it comes to data quality.


  • Haha. Not specifically.

    It’s more a comment on how hard it is to separate truth from fiction. Adding glue to pizza is obviously dumb to any normal human. Sometimes the obviously dumb answer is actually the correct one though. Semmelweis’s contemporaries lambasted him for his stupid and obviously nonsensical claims about doctors contaminating pregnant women with “cadaveric particles” after performing autopsies.

    Those were experts in the field and they were unable to guess the correctness of the claim. Why would we expect normal people or AIs to do better?

    There may be a time when we can reasonably have such an expectation. I don’t think it will happen before we can give AIs training that’s as good as, or better than, what we give the most educated humans. Reading all of Reddit doesn’t even come close to that.




  • If they actually did this correctly, it would be great. Whether or not it’s possible, or even desirable, to eliminate all hate speech, it should be possible to minimize the harms.

    When somebody mutters some hateful comment to themselves, do we care? Not really. We care that the hateful comment gets repeated and amplified. We care that someone might take harmful actions based on the comment.

    If those algorithms successfully let these comments die in ignominy they’ve done their job. My fear is that they won’t really do this though. Instead they’ll mostly hide these comments and “accidentally” let them slip out when the company thinks they need an “engagement” boost.


  • There are many subcultures around food. It’s not like the world is split between vegans and junk food addicts.

    The Cheeto-and-McDonald’s-eating crowd may have crappy nutrition, but they’re an extreme. The other extreme is meal-preppers. They know exactly how much chicken, rice, and broccoli they’re eating.

    There are huge communities of people who are very health conscious. Some of them base that consciousness on science, some on other methods. Some of those people are vegans. Some aren’t.


  • I thought credit for both of those usually goes to unions. Which anarchists or groups of anarchists made the most significant contributions to the 40 hour work week or 8 hour days?

    How did a philosophy of minimized government involvement contribute to the regulations and enforcement mechanisms around our labor laws?






  • I’m going to try to paraphrase that position to make sure I understand it. Please correct me if I got it wrong.

    AI produces something not-actual-art. Some people want stuff that’s not-actual-art. Before AI they had no choice but to pay a premium to a talented artist even though they didn’t actually need it. Now they can get what they actually need but we should remove that so they have to continue paying artists because we had been paying artists for this in the past?

    Is that correct or did I miss or mangle something?



  • nednobbins@lemm.ee to Science Memes@mander.xyz · puns (edited · 4 months ago)

    People are really awful at naming things.

    Some German nerd thought it was cool while discovering a new receptor, so they called it “toll” (German for cool/awesome). Computer science is full of names that are kind of funny if you already know the particular area but are total gibberish if you’re trying to learn it. We’re not even good at naming humans. The default is to pick one of the names that’s common in your culture. When people deviate from that, you get a huge number of “special” names.

    We need to put this in the hands of experts. I’m gonna propose a new field, “nameology”. Those folks will do a bunch of research into names that make sense. How do we best name things so they completely and unambiguously label them in a way that’s easy to remember and use? Then they can run around and give non-stupid names to all the things.