The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • ShadowRam@kbin.social · +117/−13 · 1 year ago

    The majority of U.S. adults don’t understand the technology well enough to make an informed decision on the matter.

    • GoodEye8@lemm.ee · +55/−5 · 1 year ago

      To be fair, even if you understand the tech it’s kinda hard to see how it would benefit the average worker as opposed to CEOs and shareholders who will use it as a cost reduction method to make more money. Most of them will be laid off because of AI so obviously it’s of no benefit to them.

      • treadful@lemmy.zip · +9/−1 · 1 year ago

        Efficiency and productivity aren’t bad things. Nobody likes doing bullshit work.

        Unemployment may become a huge issue, but IMO the solution isn’t busy work. Or at least come up with more useful government jobs programs.

        • GoodEye8@lemm.ee · +6/−1 · 1 year ago

          Of course, there’s nothing inherently wrong with using AI to get rid of bullshit work. The issue is who will benefit from using AI and it’s unlikely to be the people who currently do the bullshit work.

          • treadful@lemmy.zip · +0/−1 · 1 year ago

            But that’s literally everything in a capitalist economy. Value collects to the capital. It has nothing to do with AI.

        • credit crazy@lemmy.world · +2 · 1 year ago

          You see, the problem with that is that AI, in the case of animation and art, isn’t removing menial labor; it’s removing hobbies that people get paid to take part in.

      • rambaroo@lemmy.world · +1 · 1 year ago

        Most of them? The vast majority of jobs cannot be replaced by LLMs. The CEOs who believe that are delusional.

        • GoodEye8@lemm.ee · +6 · 1 year ago

          You could cut housing prices to a tenth of what they currently are and it wouldn’t matter to the homeless people who don’t have a job. Things being cheaper doesn’t matter to people who can’t make a living.

    • Moobythegoldensock@lemm.ee · +16 · 1 year ago

      If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks, identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.

    • meseek #2982@lemmy.ca · +21/−5 · 1 year ago

      Informed or not, they aren’t wrong. If there is an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.

      But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

      Just one likely incident when big brother knows all and can connect the dots using raw compute power.

      Having every little secret parcelled over the internet because we live in the digital age is not something humanity needs.

      I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.

      • treadful@lemmy.zip · +11 · 1 year ago

        But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

        None of this requires “AI.” At most AI is a tool to make this more efficient. But then you’re arguing about a tool and not the problem behavior of people.

      • aidan@lemmy.world · +6/−1 · 1 year ago

        AI is not bots; most of that would be easier to do with traditional code than with a deep learning model. But the reality is there’s no incentive for these entities to cooperate with each other.

    • cybersandwich@lemmy.world · +12/−2 · 1 year ago

      But our elected officials like McConnell, Feinstein, Sanders, Romney, Manchin, Blumenthal, and Markey have us covered.

      They are up to speed on the times and know exactly what our generation’s challenges are. I trust them to put forward meaningful legislation that captures a nuanced understanding and that will protect the interests of the American people while positioning the US as a world leader on these matters.

    • ZzyzxRoad@lemm.ee · +4/−2 · 1 year ago

      Seeing technology consistently put people out of work is enough for people to see it as a problem. You shouldn’t need to be an expert in it to have an opinion when it’s being used to threaten your source of income. Teachers have to do more work and put in more time now because ChatGPT has affected education at every level. Educators already get paid dick to work insane hours of skilled labor, and students have enough on their plates without having to spend extra time in the classroom. It’s especially unfair when every student has to pay for the actions of the few dishonest ones. Pretty ironic how it’s set us back technologically, to the point where we can’t use the tech that’s been created and implemented to make our lives easier. We’re back to sitting at our desks with a pencil and paper for an extra hour a week.

      There are already AI “books” being sold to unknowing customers on Amazon. How long will it really be until researchers are competing with it? Students won’t be able to recognize the difference between real and fake academic articles. They’ll spread incorrect information after stealing pieces of real studies without the authors’ permission, then mash them together into some bullshit that sounds legitimate. You know there will be AP articles (written by AI) with headlines like “new study says xyz!” and people will just believe that shit.

      When the government can do its job and create fail safes like UBI to keep people’s lives/livelihoods from being ruined by AI and other tech, then people might be more open to it. But the lemmy narrative that overtakes every single post about AI, that says the average person is too dumb to be allowed to have an opinion, is not only, well, fucking dumb, but also tone deaf and willfully ignorant.

      Especially when this discussion can easily go the other way, by pointing out that tech bros are too dumb to understand the socioeconomic repercussions of AI.

    • bob_wiley@lemmy.world · +3/−2 · 1 year ago

      Those who do know it have a strong bias toward new tech, which blinds them to reality or any possible negatives. We’ve seen this countless times in tech. Like when NFTs were going to change the world: you couldn’t tell those guys otherwise without being branded out of touch or someone who doesn’t understand the tech.

      • ShadowRam@kbin.social · +2/−1 · 1 year ago

        I mean, NFTs are a ridiculous comparison, because those who understood that tech were exactly the ones who said it was ridiculous.

        • bob_wiley@lemmy.world · +1/−1 · 1 year ago

          I have to believe the crypto bros understood it; they were just blinded by dollar signs… like many of those involved in AI right now.

      • Echo Dot@feddit.uk · +1 · 1 year ago

        Wasn’t it the ones who didn’t understand NFTs who were the fan boys? Everyone who knew what they were said they were bloody stupid from the get-go.

    • archon@sh.itjust.works · +1/−2 · 1 year ago

      You can make an observation that something is dangerous without intimate knowledge of its internal mechanisms.

      • ShadowRam@kbin.social · +2/−2 · 1 year ago

        Sure you can, but that doesn’t change the fact that you’re ignorant of whether it’s dangerous or not.

        And these people are making ‘observations’ without knowledge of even the external mechanisms.

        • archon@sh.itjust.works · +2/−1 · 1 year ago

          I’m sure I can name many examples of things I observed to be dangerous where the observation turned out to be correct. But sure, claim unilateral ignorance and dismiss anyone who doesn’t agree with your view.

  • Uncle_Iroh@lemmy.world · +70/−25 · 1 year ago

    Most of the U.S. adults also don’t understand what AI is in the slightest. What do the opinions of people who are not in the slightest educated on the matter affect lol.

    • Mac@mander.xyz · +45/−1 · 1 year ago

      “What do the opinions of people who are not in the slightest educated on the matter affect”

      Judging by the elected leaders of the USA: quite a lot, in fact.

      • Armen12@lemm.ee · +1/−7 · 1 year ago

        So you’d rather only the 1% get the right to vote? How about only white landowners? How about only men getting to vote in this wonderful utopia of yours?

        • 4am@lemm.ee · +14/−1 · 1 year ago

          Stop stop there isn’t any straw left!

          • Armen12@lemm.ee · +3/−3 · 1 year ago

            Making a mockery of the workforce, who rely on jobs to not be homeless, is not appropriate in this conversation. It isn’t even an argument to begin with; it’s just a snobbish incel, who probably lives in a gated community, mocking poor people.

      • Wolf_359@lemmy.world · +34/−15 · 1 year ago

        Prime example. Atomic bombs are dangerous and they seem like a bad thing. But then you realize that, counter to our intuition, nuclear weapons have created peace and security in the world.

        No country with nukes has been invaded. No world wars have happened since the invention of nukes. Countries with nukes don’t fight each other directly.

        Ukraine had nukes, gave them up, and was promptly invaded by Russia.

        Things that seem dangerous aren’t always dangerous. Things that seem safe aren’t always safe. More often though, technology has good sides and bad sides. AI does and will continue to have pros and cons.

        • Hexagon@feddit.it · +33/−2 · 1 year ago

          Atomic bombs are also dangerous because if someone ends up launching one by mistake, all hell is gonna break loose. This has almost happened multiple times:

          https://en.wikipedia.org/wiki/List_of_nuclear_close_calls

          We’ve just been lucky so far.

          And then there are questionable state leaders who may even use them willingly. Like Putin, or Kim, maybe even Trump.

          • gravitas_deficiency@sh.itjust.works · +4/−4 · 1 year ago

            …and the development and use of nuclear power has been one of the most important developments in civil infrastructure in the last century.

            Nuclear isn’t categorically free from the potential to harm, but it can also do a whole hell of a lot for humanity if used the right way. We understand it enough to know how to use it carefully and safely in civil applications.

            We’ll probably get to the same place with ML… eventually. Right now, everyone’s just throwing tons of random problems at it to see what sticks, which is not what one could call responsible use - particularly when outputs are used in a widespread sense in production environments.

        • cheery_coffee@lemmy.ca · +24/−3 · 1 year ago

          Alright, when the AI takes my job and I can’t feed my family while the billionaires add another digit to their net worth I’ll consider the pros.

          There’s about 0% chance we reform society for AI, it will just funnel more wealth to the rich. People claim it will open new jobs but I don’t see it.

          • Jerkface@lemmy.world · +7/−7 · edited · 1 year ago

            People have had the same concerns about automation since basically forever. Automation isn’t the problem. The people who use automation to perpetuate the systems that work against us will continue to find creative ways to exploit us with or without AI. Those people and those systems-- they are the problem. And believe it or not, that problem is imminently solvable.

            • cheery_coffee@lemmy.ca · +9/−1 · 1 year ago

              It’s fair to compare but you can’t dismiss concerns based on that.

              Past automation often removed duplicate or superfluous work; AI removes thought work. It’s a fundamentally different kind of automation than we’ve seen before.

              It will make many things cheaper to do and easier to start some businesses, but it will also decimate workers. It’s also not something that’s generally available to lower classes to wield yet.

              It’s here but I don’t have to be optimistic.

              • Jerkface@lemmy.world · +1/−2 · 1 year ago

                I fully agree with everything you said. My point is more that if we look at AI as the culprit, we’re missing the point. If I may examine the language you are using a bit-

                AI removes thought work.

                Employers are the agents. They remove thought work.

                it will also decimate workers.

                Employers will decimate workers.

                It would be smart to enact legislation that will mitigate the damage employers enabled by AI will do to workers, but they will continue to exploit us regardless.

                Using language that makes AI the antagonist helps tyrants deflect their overwhelming share of the blame. The responsible parties are people, who can and should be held accountable.

                • cheery_coffee@lemmy.ca · +4 · 1 year ago

                  I don’t think you’re wrong either, but at the same time it’s not feasible for everyone to be their own agent and it’s not feasible to say employers can’t use AI.

                  I don’t know what the solution is, but I’m prepping for a sudden career change in the next few years.

              • Jerkface@lemmy.world · +3/−2 · edited · 1 year ago

                I want to avoid using the term solution, not least of all because implementation has its own set of challenges, but some of us used to dream that automation would replace those jobs. Perhaps naively, some of us assumed that people just wouldn’t have to work as much. And perhaps I continue to be naive in thinking that that should still be our end goal. If automation reduces the required work hours by 20% with no reduction in profit, full time workers should have a 32 hour week with no reduction in income.

                But since employers will always pocket that money if given the option, we need more unionization, we need unions to fight for better contracts, we need legislation that will protect and facilitate them, and we need progressive taxation that will decouple workers most essential needs from their employers so they have more of a say in where and how they work, be that universal public services, minimum income guarantee, or what have you.

                We’re quite far behind in this fight but there has been some recent progress about which I am pretty optimistic.

                • Franzia@lemmy.blahaj.zone · +2 · edited · 1 year ago

                  This was so very thoughtful, and after reading it, I feel optimistic too. Fuck yeah.

                  Edit: thank you.

          • PsychedSy@sh.itjust.works · +0/−5 · 1 year ago

            Technology tends to drive costs down and create more jobs, but in different areas. It’s not like there hasn’t been capture by the super rich in the past 150 years, but somehow we still enjoy better lives decade by decade.

        • walrusintraining@lemmy.world · +9/−1 · 1 year ago

          That’s a good point, however just because the bad thing hasn’t happened yet, doesn’t mean it wont. Everything has pros and cons, it’s a matter of whether or not the pros outweigh the cons.

        • bogdugg@sh.itjust.works · +4 · 1 year ago

          I don’t disagree with your overall point, but as they say, anything that can happen, will happen. I don’t know when it will happen; tomorrow, 50 years, 1000 years… eventually nuclear weapons will be used in warfare again, and it will be a dark time.

        • Techmaster@lemm.ee · +0/−1 · 1 year ago

          No world wars have happened since the invention of nukes

          Except the current world war.

      • GigglyBobble@kbin.social · +6/−4 · edited · 1 year ago

        You need to understand to correctly classify the danger though.

        Otherwise you make stupid decisions, such as quitting nuclear energy in favor of coal because of an incident like Fukushima, even though that incident had a single casualty due to radiation.

      • Uncle_Iroh@lemmy.world · +1/−4 · 1 year ago

        You chose an analogy with the most limited scope possible, but sure, I’ll go with it. To understand exactly how dangerous an atomic bomb is without just looking up Hiroshima, you need to have at least some knowledge of the subject; you’d also have to understand all the nuances, etc. The thing about AI is that most people haven’t a clue what it is, how it works, or what it can do. They just listen to the shit their Telegram-loving uncle spewed at the family gathering. A lot of people think AI is fucking sentient lmao.

        • walrusintraining@lemmy.world · +4/−1 · 1 year ago

          I don’t think most people think AI is sentient. In my experience, the people who think that are the ones who think they’re the most educated, saying stuff like “neural networks are basically the same as a human brain.”

          • Uncle_Iroh@lemmy.world · +1/−2 · 1 year ago

            You don’t think, yet a software engineer from Google, Blake Lemoine, thought LaMDA was sentient. He took a lot of idiots down with him when he went public with those claims. Not to mention the movies that were made with the premise of sentient AI.

            Your anecdotal experience and your feelings don’t in the slightest affect the reality that there are tons of people who think AI is sentient and will somehow start some fucking robo revolution.

      • WhyIDie@kbin.social · +3/−7 · edited · 1 year ago

        you also don’t have to understand how 5g works to know it spreads covid /s

        point is, I don’t see how your analogy works beyond the limited scope of only things that result in an immediate loss of life

        • walrusintraining@lemmy.world · +9/−1 · 1 year ago

          I don’t need to know the ins and outs of how the nazi regime operated to know it was bad for humanity. I don’t need to know how a vaccine works to know it’s probably good for me to get. I don’t need to know the ins and outs of personal data collection and exploitation to know it’s probably not good for society. There are lots of examples.

          • WhyIDie@kbin.social · +2 · 1 year ago

            okay, I’ll concede, my scope also was pretty limited. I still stand by not trusting the public with deciding what’s the best use of AI is, when most people think it’s anything more than statistics supercharged in its implementation.

          • linearchaos@lemmy.world · +1 · 1 year ago

            I can certainly grant that “you” don’t need to know, but there are a lot of differing opinions on even the things you’re talking about among the people in this very community.

            I would say that the royal “we” needs to know, because a lot of people hold opinions on facts that don’t line up with the actual facts. Sure, not you, not me, but a hell of a lot of people.

            • walrusintraining@lemmy.world · +3/−1 · 1 year ago

              I don’t disagree that people are stupid, but the majority of people got/supported the vaccine. Majority is sometimes a good indicator, that’s how democracy works. Again, it’s not perfect, but it’s not useless either.

    • Franzia@lemmy.blahaj.zone · +6/−1 · 1 year ago

      Well and being a snob about it doesn’t help. If all the average joe knows about AI is what google or openAI pushed to corporate media, that shouldn’t be where the conversation ends.

      • Uncle_Iroh@lemmy.world · +2/−2 · 1 year ago

        The average Joe can have their thoughts on it all they want, but their opinions on the matter aren’t really valid or of any importance. AI is best left to the people who have a deep knowledge of the subject, just as nuclear fusion is best left to scientists studying the field. I’m not going to tell average Joe the mechanic that I think the engine he just rebuilt might blow up, because I have no fucking clue about it. Sure, I have some very basic knowledge; that’s pretty much where it ends, though.

    • gravitas_deficiency@sh.itjust.works · +4 · 1 year ago

      You can not know the nuanced details of something and still be (rightly) sketched out by it.

      I know a decent amount about the technical implementation details, and that makes me trust its use in (what I perceive as) inappropriate contexts way less than the average layperson.

    • kitonthenet@kbin.social · +4/−2 · 1 year ago

      Because they live in the same society as you, and they get to decide who goes to jail as much as you do

    • Armen12@lemm.ee · +6/−7 · 1 year ago

      What a terrible thing to say, they’re human beings so I hope they matter to you

          • Uncle_Iroh@lemmy.world · +1/−6 · 1 year ago

            I am a terrible person simply because they don’t matter to me? Do you cry for every death your military caused? Do you cry for every couple with a stillborn baby? No, you don’t. You think it’s shitty, because it is. But you don’t really care; they don’t truly matter to you. The way you throw those words around makes their meaning less.

            • Armen12@lemm.ee · +1 · 1 year ago

              Lot of words to just say you’re a terrible person, we got it already, you don’t need to explain why you’re terrible

  • Endorkend@kbin.social · +40/−1 · 1 year ago

    The problem is that there is no real discussion about what to do with AI.

    It’s being allowed to be developed without much of any restrictions and that’s what’s dangerous about it.

    Like how some places are starting to use AI to profile the public, Minority Report style.

    • pavnilschanda@lemmy.world · +10 · 1 year ago

      Yep. It’s either “embrace the future, adapt or die” or “let’s put the technological genie back in the bottle”. No actual nuance.

      • PopOfAfrica@lemmy.world · +18 · 1 year ago

        The problem is that capitalism puts us in this position. Nobody is abstractly upset that the jobs we hate can now be automated.

        What is upsetting is that we wont be able to eat because of it.

  • Dasnap@lemmy.world · +40/−3 · 1 year ago

    The past decade has done an excellent job of making people cynical about any new technology. I find looking at what crypto bros are currently interested in as a good canary for what I should be suspicious of.

    • iopq@lemmy.world · +27/−4 · 1 year ago

      The vaccine saved millions of lives, yet people will be cynical despite reality

        • huginn@feddit.it · +9 · 1 year ago

          If more of your family and friends are dying why would you avoid the ounce of prevention? That doesn’t make sense

          • GigglyBobble@kbin.social · +4 · edited · 1 year ago

            They wouldn’t attribute it to the virus but something like 5G radiation. And yes, it doesn’t make sense.

    • raktheundead@fedia.io · +5/−1 · 1 year ago

      It’s also worth noting that the same VCs who backed cryptocurrency have pivoted to generative AI. It’s all part of the same grift, just with different clothes.

      • WldFyre@lemm.ee · +1 · 1 year ago

        Most major companies didn’t touch crypto with a 10ft pole, but they’ve leapt at the chance to use AI tech. I don’t think it’s the same grift at all personally.

        • raktheundead@fedia.io · +1 · 1 year ago

          A lot of companies investigated cryptocurrency obliquely; “blockchain” was the hype word in tech for several years. And several of those companies had a serious sunk-cost fallacy going when they perpetuated their blockchain projects, despite blockchain at best being a case of Worse Is Better, where a solution that sucks but exists can be better than a perfect option that doesn’t.

    • Fermion@feddit.nl · +2/−1 · 1 year ago

      I am really disappointed that crypto became synonymous with speculative “investing.” The core blockchain technology seems like it could be useful for enhancing privacy online. However, the majority of groups loudly advertising that they use crypto are exploitative money grabs.

    • kitonthenet@kbin.social
      link
      fedilink
      arrow-up
      1
      ·
      1 year ago

      It doesn’t hurt that the same companies that did all the things that made people cynical about technologies are the ones perpetrating this round of BS

  • Queen HawlSera@lemm.ee
    link
    fedilink
    English
    arrow-up
    39
    arrow-down
    11
    ·
    1 year ago

At first I was all on board for artificial intelligence, in spite of being told how dangerous it was. Now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.

    • nandeEbisu@lemmy.world
      link
      fedilink
      English
      arrow-up
      32
      ·
      1 year ago

AI is such a huge term. Google Lens is great; when I’m travelling I can take a picture of text and have it automatically translated. Both of those are aided by machine learning models.

Generative text and image models have proven to have more adverse effects on society.

      I think we’re at a point where we should start normalizing using more specific terminology. It’s like saying I hate machines, when you mean you hate cars, or refrigerators or air conditioners. It’s too broad of a term to be used most of the time.

      • CoderKat@lemm.ee
        link
        fedilink
        English
        arrow-up
        14
        ·
        1 year ago

Yeah, I think LLMs and AI art have so dominated the discourse that some people think they’re the only forms of AI that exist, ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop’s intelligent eraser, etc.

        Some forms of AI are debatable of their value (especially in their current form). But there’s other types of AI that most people consider highly useful and I think we just forget about it because the controversial types are more memorable.

        • nandeEbisu@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

AI is a tool; its value depends on the application. Transformer architectures can be used for generating text or music, but they were originally developed for text translation, which people have fewer qualms with.

        • SnipingNinja@slrpnk.net
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          ignoring things like text translation, the autocompletion of your phone keyboard, Photoshop intelligent eraser, etc.

          AFAIK two of those are generative AI based or as you said LLMs and AI art

        • nandeEbisu@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

It’s not a matter of slang; it’s that “AI” refers to too broad a thing. You don’t need to go as deep as the type of model; something like AI image generation or generative language models is what you would refer to. We’ll hopefully start converging on shorthand from there for specific things.

        • nandeEbisu@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 year ago

I’m kind of surprised people are more concerned with the output quality of ChatGPT than with where its training set is sourced from, as with image models.

          Language models are still in a stage where they aren’t really a product by themselves, they really need to be cajoled into becoming a good product, like looking up context via a traditional search and feeding it to the model, or guiding it towards solving problems. That’s more of a traditional software problem that leverages large language models.

Even going from a text-prediction model trained on a bunch of articles to something that infers it should put an answer after a question takes a lot of engineering.
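The pattern that comment describes, looking up context via a traditional search and feeding it to the model, can be sketched in a few lines. This is a toy illustration only: the keyword-overlap "search" and the prompt template are hypothetical stand-ins for a real search index and a real LLM call.

```python
# Toy sketch of retrieval-plus-prompting: score documents by keyword
# overlap with the query, then stuff the best match into the prompt
# ahead of the question. Real systems use a proper search index and
# then send the assembled prompt to a language model.

def score(query: str, doc: str) -> int:
    """Count how many words the document shares with the query."""
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context to the question, as the comment suggests."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Mitre-Harris poll surveyed U.S. adults about AI risks.",
    "Transformers were originally developed for text translation.",
]
prompt = build_prompt("What were transformers developed for?", docs)
```

The point is that none of this glue is the model itself; it’s ordinary software engineering wrapped around the model, which is exactly why a raw language model isn’t a product on its own.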

    • Franzia@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      11
      ·
      1 year ago

      This is basically how I feel about it. Capital is ruining the value this tech could have. But I don’t think it’s dangerous and I think the open source community will do awesome stuff with it, quietly, over time.

    • Chickenstalker@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      5
      ·
      1 year ago

      Dude. Drones and sexbots. Killing people and fucking (sexo) people have always been at the forefront of new tech. If you think AI is only for teh funni maymays, you’re in for a rude awakening.

  • orca@orcas.enjoying.yachts
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    3
    ·
    1 year ago

    I work with AI and don’t necessarily see it as “dangerous”. CEOs and other greed-chasing assholes are the real danger. They’re going to do everything they can to keep filling human roles with AI so that they can maximize profits. That’s the real danger. That and AI writing eventually permeating and enshittifying everything.

    A hammer isn’t dangerous on its own, but becomes a weapon in the hands of a psychopath.

    • q47tx@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      1 year ago

      Exactly. AI should remain a tool for the human to use, not something to replace the human.

        • Eccitaze@yiffit.net
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          4
          ·
          1 year ago

          And if the odds of that happening are literally zero, what then? If the only feasible outcome of immediate, widespread AI adoption is an empty suit using the heel of their $750 Allen Edmonds shoe to grind the face of humanity even further into the mud, should we still plow on full steam ahead?

The single biggest lesson humanity has failed to learn despite getting repeatedly smacked in the face since the industrial revolution is that sometimes new technologies and ideas aren’t worth the cost despite the benefits. Factories came and covered vast swaths of land in soot and ash, turned pristine rivers and lakes into flaming rivers of toxic sludge, and poisoned the earth. Cars choked the skies with smog, poisoned an entire generation with lead, and bulldozed entire neighborhoods and parks so that they could be paved over for parking lots and clogged freeways. Single use plastics choke the life out of our oceans, clog our waterways with garbage, and microplastics have infused themselves into our very biology, with health implications that will endure for generations. Social media killed the last remaining vestiges of polite discourse, opened the floodgates on misinformation, and gave a safe space for conspiracy theories and neonazis to fester. And through it all, we continue to march relentlessly towards a climate catastrophe that can no longer be prevented, with the only remaining variable being where the impact will lie on the spectrum from “life will suck for literally everyone, some worse than others” to “humanity will fall victim to its own self-created mass extinction event.”

          With multiple generations coming to the realization that all the vaunted progress of mankind will directly make their lives worse, an obvious trend line of humanity plowing ahead with the hot new thing and ignoring the consequences even after they become obvious and detrimental to society as a whole, and the many, instantly-obvious negative impacts AI can have, is it any wonder that so many are standing up and saying “No?”

    • Mjpasta@kbin.social
      link
      fedilink
      arrow-up
      2
      ·
      1 year ago

      So, because of greed and endless profit seeking, expect all corporations to replace everything that can be replaced with AI…?

      • orca@orcas.enjoying.yachts
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        I mean, they’re already doing it. Not in every role because not every one of them can be filled by AI, but it’s happening.

  • DarkGamer@kbin.social
    link
    fedilink
    arrow-up
    20
    arrow-down
    1
    ·
    1 year ago

    “Can’t we just make other humans from lower socioeconomic classes toil their whole lives, instead?”

    The real risk of AI/automation is if we fail to adapt our society to it. It could free us from toil forever but we need to make sure the benefits of an automated society are spread somewhat evenly and not just among the robot-owning classes. Otherwise, consumers won’t be able to afford that which the robots produce, markets will dry up, and global capitalism will stop functioning.

  • gmtom@lemmy.world
    link
    fedilink
    English
    arrow-up
    13
    ·
    1 year ago

Most US adults couldn’t tell you what LLM stands for, never mind how stable diffusion works. So there’s not much point in asking them, as they won’t understand the benefits and the risks.

  • flossdaily@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    arrow-down
    4
    ·
    edit-2
    1 year ago

    The truly terrifying thing about AI isn’t really the Skynet fears… (it’s fairly easy to keep humans in the loop regarding nuclear weapons).

    And it’s not world domination (an AI programmed to govern with a sense of egalitarianism would be better than any president we’ve had in living memory).

    No. What keeps me up at night is thinking about what AI means for my kids and grandkids, if it works perfectly and doesn’t go rogue.

    WITHIN 20 years, AI will be able to write funnier jokes, more beautiful prose, make better art, write better books, do better research, and generally outperform all humans on all tasks.

    This chills me to my core.

    Because, then… Why will we exist? What is the point of humanity when we are obsolete in every way that made us amazing?

    What will my kids and grandkids do with their lives? Will they be able to find ANY meaning?

    AI will cure diseases, solve problems we can’t begin to understand, expand our lifespan and our quality of life… But the price we pay is an existence without the possibility of accomplishments and progress. Nothing we can create will ever begin to match these AIs. And they will be evolving at an exponential rate… They will leave us in the dust, and then they will become so advanced that we can’t begin to comprehend what they are.

    If we’re lucky we will be their well-cared-for pets. But what kind of existence is that?

    • Billiam@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      arrow-down
      4
      ·
      1 year ago

      People don’t play basketball because Michael Jordan exists?
      People don’t play hockey because Wayne Gretzky exists?
      People don’t paint because Picasso exists?
      People don’t write plays because Shakespeare exists?
      People don’t climb Everest because Hillary and Norgay exist?

      Are you telling me because you’re not the best at everything you do, nothing is worth doing? Are you saying that if you’re not the first person to do a thing, there’s no enjoyment to be had? So what if the singularity means AI will solve everything- that just means there’s more time for leisurely pursuits. Working for the sake of working is bullshit.

      • lloram239@feddit.de
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        People don’t play basketball because Michael Jordan exists? […]

        Problem is: That’s one guy, far away and rather expensive if you want them in your team.

AI, in contrast, will be ubiquitous, powerful, and cheap, and do whatever you want from it. That’s way harder to resist, especially once you have a generation of people who have grown up with it and for whom that is the new normal.

    • br3d@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      ·
      1 year ago

You need to read some Iain M. Banks. His Culture novels are set in essentially that future, where AI runs everything. A lot of his characters are looking for meaning within such a world.

    • Nipah@kbin.social
      link
      fedilink
      arrow-up
      10
      ·
      1 year ago

      While I do understand where you’re coming from, someone being better at something shouldn’t stop a person from doing what they love.

      There are millions of people who draw better, sing better, dance better, write better, play video games better, design websites better or just do anything I can do better than I can… and that’s fine.

    • Peanut@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 year ago

      I mean, chess is already obsolete, but it’s also more popular than ever.

      To me there is extreme value in being able to choose your endeavor vs being forced into something agonizing just to survive.

      When everything is obsolete, people can create entire worlds and experiences using AI for themselves and for others who may care to experience it.

      The threat of needing to find something to do is one of the most frustratingly privileged concepts.

      I don’t need anything to do. I just want to be alive without also being exhausted, in pain, and chastised by customers despite working my hardest.

      I’d rather the struggle of finding an activity over worrying about whichever coworker is crying in the walk-in because just surviving requires more from them than they are capable of.

      Being obsoleted is fine by me, as long as we have the power redistribution necessary to keep people alive and happy.

    • snooggums@kbin.social
      link
      fedilink
      arrow-up
      4
      arrow-down
      3
      ·
      1 year ago

      AI won’t be creating anything new anytime soon, because it recycles existing art just like hack writers do now. The “best” art tends to require a supporting story, which AI won’t have. Comedy changes constantly, and AI won’t be any better than people trying random stuff.

      You don’t question your existence because other people are smarter or better at doing things, right? Is most of humanity not of any value because they aren’t the best at everything?

      • chaorace@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        1 year ago

        AI won’t be creating anything new anytime soon, because it recycles existing art just like hack writers do now.

        This is one of those half-truths which I think is doing more harm than good for the AI-skeptic crowd. If all we have to offer in our own defense is that we have souls and the machines do not, then what does that mean if the machines ever surpass us? (For the kids snickering in the back: I am using “soul” as a poetic stand-in for the ineffable creative quality which the “AI as collage-maker” argument ascribes to human people – nothing spiritual).

        For now, the future of AI is incredibly uncertain. We have no clear idea just how much gas is left in the moment of this current generative AI breakthrough. Regardless of whether you are optimistic or pessimistic, do not trust anyone who acts like they know for a definitive fact what the technology will or won’t be capable of.

      • lloram239@feddit.de
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        1 year ago

        AI won’t be creating anything new anytime soon

        It already has.

        The “best” art tends to require a supporting story

        ChatGPT can write that. Multi-modal models that combine text generation with audio and video are months away.

        AI won’t be any better

        Those claims have the tendency to not age well.

        You don’t question your existence because other people are smarter or better at doing things, right?

Humans aren’t that much better than me, and they aren’t doing the things I want done. AI, on the other hand, will be much better than me, will do exactly what I want it to do, and will be a click away.

And yeah, I’ve had numerous experiences where I would question my existence when playing around with ChatGPT or Stable Diffusion. Neither of them is quite good enough yet, but they are very much on a trajectory where you can see you have zero chance of competing with them in the future, or even getting remotely close.

The fact that we got them in the first place, not from humans doing centuries of research on art and language, but simply by throwing huge amounts of training data at AI algorithms, should be enough to make you question your existence.

        • snooggums@kbin.social
          link
          fedilink
          arrow-up
          2
          arrow-down
          1
          ·
          edit-2
          1 year ago

AI writing a fictional backstory about how it came up with some piece of art is not the same thing as multiple researchers telling the story of an artist. Neither of your examples is something a person couldn’t do; whoever prompted it could have done the same thing and simply hadn’t yet.

You are completely missing the point that great art is generally supported by the context of how it was made, not the end result in a vacuum.

      • flossdaily@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        I understand why you think that, but what you have to remember is that every great piece of art you’ve ever seen has been derivative of something before it.

        For example, I think of the Beatles as musical geniuses. But they are the first to admit that they stole other people’s ideas left and right.

        Beethoven’s 9th symphony is this piece of transcendental music, that was widely considered at the time to be the greatest symphony ever written.

        But if you listen to Beethoven’s works over time, you see that the seeds of that symphony were planted much much earlier in inferior works.

        Genius and creation aren’t what we think they are. They are all just incremental steps.

        • snooggums@kbin.social
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          1 year ago

That is overly reductive and conflates copying (like a cover band) with creating something new (being influenced). Heck, even when bands play new versions of existing songs they add their own personal touch, with the possibility of making the song mean something new. Like how “Hurt” by NIN and by Johnny Cash are the same song, but how they are performed makes them about completely different experiences.

          Even when bands like Led Zeppelin outright covered existing songs they added something to it that AI can’t, and won’t be able to do. AI can’t have sexually charged energy that a human can have. They can pretend to, like how cover bands can pretend to be like the band they are covering, but AI won’t be able to replicate the personal touch that memorable art has.

          Even popular stuff with widespread appeal frequently drops off over time because it isn’t the type of art that holds up over time. Hell, the Beatles mostly hold up more for when they were popular and how they have managed their legacy than any kind of technical prowess in musicianship. Without their performances, their personas, and the backstory to most of their music it is just well done music that has been superseded musically since that time. None of that will apply to AI, and without the backstory it will just end up being high quality music that won’t stand the test of time because we don’t have any context for it.

          Hell, there were a ton of other composers during Beethoven’s time that were putting out great music too, but you know who he is because of details other than his musical prowess.

  • bigkix@lemm.ee
    link
    fedilink
    English
    arrow-up
    9
    ·
    1 year ago

My opinion: the current state of AI is nothing special compared to what it can be. And when it gets close to all it can be, it will be used (as always happens) to generate even more money and more inequality. The movie “Elysium” comes to mind.

  • vzq@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    8
    ·
    1 year ago

The problem is that I’m pretty sure that whatever benefits AI brings, they are not going to trickle down to people like me. After all, all the AI investment is coming from the digital landlords and is designed to keep their rent-seeking companies in the saddle for at least another generation.

    However, the drawbacks certainly are headed my way.

So even if I’m optimistic about the possible uses of AI, I’m not optimistic about this particular strand of the future we’re headed toward.

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    8
    ·
    1 year ago

The general public don’t understand what they’re talking about, so it’s not worth asking them.

What is the point of surveys like this? We don’t operate on direct democracy, so there’s literally no value in these things except to stir the pot.

  • balloflearning@midwest.social
    link
    fedilink
    English
    arrow-up
    8
    arrow-down
    1
    ·
    1 year ago

    Generally, people are wary of disruptive technology. While this technology has potential to displace a plethora of jobs for the sake of increased productivity, companies won’t be able to move product if unemployment skyrockets.

Regardless of what people think, the Pandora’s box of AI is opened and now the only way forward is to adapt.

    • flossdaily@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      1 year ago

      Yes.

      All our science fiction stories prepared us for a world where AI was only possible with a giant supercomputer somewhere, or some virus that exists beyond human control, spread throughout the internet.

      We were not prepared for the reality that all at once, any average Joe could create an AI on their home PC.

We absolutely can’t go backwards, and right now we are in the most important race in history, against every other country and company, to create the best AI.

      Whoever can make a self-replicating, self-improving AI first will rule the world. Or rather its AI will.

      • walrusintraining@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        1 year ago

        What companies have decided to call AI is not at all the same as what AI used to refer to and what science fiction stories refer to.

        • flossdaily@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          1 year ago

          GPT-4 absolutely is on the spectrum of true artificial general intelligence.

          We have arrived.

          • kromem@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 year ago

            But it’s being used today by doctors to rewrite patient notes to sound more empathetic.

            What SciFi depiction of AI had it being used by humans in order to be more empathetic than humans?

            We really got it wrong badly in terms of predicting what it would look like and what it actually is.

  • peopleproblems@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    edit-2
    1 year ago

A majority of U.S. adults don’t believe jack shit about the benefits of most things.

I’m more angry that I can’t use Copilot at work yet.