• GoodEye8@lemm.ee
      4 months ago

      Well yeah, because dedicated DACs have a tangible benefit: better audio. If you want better audio you need to buy a quality DAC and quality cans.

      I also used to think it was dumb, because who cares as long as you can hear? But then I built a new PC, and I don’t know if it was a faulty mobo or just an unlucky setup, but the internal DAC started picking up static. So I got an external DAC, and what I noticed was that the audio sounded clearer; I could hear things in the sound that I couldn’t hear before. It was magical, like someone had added new layers to my favorite songs. I had taken the audio crack.

      I pretty quickly gave away my DAC along with my Audio-Technicas because I could feel the urge. I needed another hit. I needed more. I got this gnawing itch, and I knew I had to get out before the addiction completely took over. Now I live in static, because I do not dare to touch the sun again.

      Sound Blasters may be shit, but the hardware they’re supposed to sell is legit; it has a tangible benefit for whoever can tell the difference. But with AI, what is the tangible benefit that you couldn’t get by buying a better GPU?

  • BlackLaZoR@kbin.run
    4 months ago

    Unless you’re doing music or graphic design, there’s no use case. And if you do, you probably have a high-end GPU anyway.

    • DarkThoughts@fedia.io
      4 months ago

      I could see a use for local text gen, but that apparently demands quite a bit more than what desktop PCs can offer if you want actually good results and speed. Generally though, I’d rather have separate expansion cards for this. Making it part of other processors will just increase their price, even for those who have no use for it.

      • BlackLaZoR@kbin.run
        4 months ago

        There are local models for text gen - not as good as ChatGPT, but at the same time they’re uncensored - so it may or may not be useful.

  • rtxn@lemmy.world
    4 months ago

    The dedicated TPM chip is already being used for side-channel attacks. A new processor running arbitrary code would be a black hat’s wet dream.

      • rtxn@lemmy.world
        4 months ago

        TPM-FAIL from 2019. It affects Intel fTPM and some dedicated TPM chips: link

        The latest (at the moment) UEFI vulnerability, UEFIcanhazbufferoverflow, is also related to, but not directly caused by, TPM on Intel systems: link
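For anyone wondering what a timing side channel actually looks like, here’s a toy sketch. To be clear, this has nothing to do with the real TPM-FAIL leak (that one recovered ECDSA nonces from signing-time variations, which is far subtler); it just shows the basic principle that a comparison which bails out early leaks how much of a secret you’ve guessed, and why crypto code reaches for constant-time primitives like `hmac.compare_digest`:

```python
import hmac
import timeit

SECRET = b"hunter2hunter2hunter2hunter2"

def naive_compare(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time
    # leaks how long the matching prefix is.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where
    # the first mismatch occurs, masking the timing signal.
    return hmac.compare_digest(a, b)

# A guess sharing a long prefix with the secret takes measurably
# longer to reject under the naive comparison than one that
# mismatches on the very first byte.
wrong_early = b"X" * len(SECRET)
wrong_late = SECRET[:-1] + b"X"
t_early = timeit.timeit(lambda: naive_compare(SECRET, wrong_early), number=200_000)
t_late = timeit.timeit(lambda: naive_compare(SECRET, wrong_late), number=200_000)
print(f"early mismatch: {t_early:.3f}s, late mismatch: {t_late:.3f}s")
```

An attacker who can measure those differences can recover a secret byte by byte, which is why “it’s just a few nanoseconds” is not a defense.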

        • barsquid@lemmy.world
          4 months ago

          That’s insane. How can they be doing security hardware and leave a timing attack in there?

          Thank you for those links, really interesting stuff.

    • MajorHavoc@programming.dev
      4 months ago

      It will be.

      IoT devices are already getting owned at staggering rates. Adding a learning model that currently cannot be secured is absolutely going to happen, and it’s going to cause a whole new batch of breaches.

  • UnderpantsWeevil@lemmy.world
    4 months ago

    Okay, but hear me out. What if the OS got way worse, and then I told you that paying me for the AI feature would restore it to a near-baseline level of original performance? What then, eh?

  • FMT99@lemmy.world
    4 months ago

    Show the actual use case in a convincing way and people will line up around the block. Generating some funny pictures or making generic suggestions about your calendar won’t cut it.

    • overload@sopuli.xyz
      4 months ago

      I completely agree. There are some killer AI apps, but why should AI run on my OS? Recall is a complete disaster of a product and I hope it doesn’t see the light of day, but I’ve no doubt that there’s a place for AI on the PC.

      Whatever application AI has at the OS level, it needs to be a trustless system that the user has complete control of. I’d be all for an open-source AI running at that level, but Microsoft won’t do that, because they want to ensure that they control your OS data.

      • PriorityMotif@lemmy.world
        4 months ago

        Machine learning in the OS is a great value-add for medium to large companies, since it will allow them to track the real productivity of office workers and easily replace them. Say goodbye to middle management.

  • Poutinetown@lemmy.ca
    4 months ago

    Tbh this is probably for things like DLSS, captions, etc. Not necessarily for chatbots or generative art.

  • cygnus@lemmy.ca
    4 months ago

    The biggest surprise here is that as many as 16% are willing to pay more…

    • ShinkanTrain@lemmy.ml
      4 months ago

      I mean, if framegen and supersampling solutions become so good on those chips that regular versions can’t compare, I guess I would get the AI version. I wouldn’t pay extra compared to current pricing, though.
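For context, frame generation in its crudest form is just synthesizing an in-between frame from two rendered ones. A toy sketch (real framegen like DLSS 3 estimates motion vectors and runs a learned model on dedicated hardware; naive blending like this just ghosts on any actual movement):

```python
def interpolate_frame(frame_a, frame_b, t=0.5):
    """Blend two frames of grayscale pixel values into an
    intermediate frame at time t between them (0.0..1.0).

    Purely illustrative: real frame generation warps pixels along
    estimated motion vectors instead of averaging in place.
    """
    return [
        [int((1 - t) * a + t * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Halfway between a dark frame and a brighter one.
mid = interpolate_frame([[0, 100]], [[100, 200]])
print(mid)  # [[50, 150]]
```

The hard part, and the reason it wants dedicated silicon, is doing the motion estimation and inpainting well enough that the fake frames are indistinguishable at 100+ fps.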

  • ZILtoid1991@lemmy.world
    4 months ago

    A big letdown for me is that, except in some rare cases, those extra AI features are useless outside of AI. Some NPUs are straight-up DSPs that could easily run OpenCL code; others are designed to handle only machine-learning number formats rather than normal floating point, or are CPU extensions that are just even bigger vector multipliers for select datatypes (AMX).

    • x0x7@lemmy.world
      4 months ago

      Maybe people doing AI development who want the option of running local models.

      But baking AI into all consumer hardware is dumb. Very few want it. SaaS AI is a thing. To the degree SaaS AI doesn’t offer the privacy of local AI, networked local AI on devices you don’t fully control offers even less. So it makes no sense for people who value convenience, and it offers no value for people who want privacy. It only offers value to people doing software development who need more playground options, and I can go buy a graphics card myself, thank you very much.

    • Honytawk@lemmy.zip
      4 months ago
      • The ones who have investments in AI

      • The ones who listen to the marketing

      • The ones who are big Weird Al fans

      • The ones who didn’t understand the question

    • barfplanet@lemmy.world
      4 months ago

      I’m interested in hardware that can better run local models. Right now the best bet is a GPU, but I’d be interested in a laptop with dedicated chips for AI that would work with PyTorch. I’m a novice, but I know it takes forever on my current laptop.

      Not interested in running copilot better though.

  • Buelldozer@lemmy.today
    4 months ago

    I’m fine with NPUs / TPUs (AI-enhancing hardware) being included with systems because it’s useful for more than just OS shenanigans and commercial generative AI. Do I want Microsoft CoPilot Recall running on that hardware? No.

    However, I’ve bought TPUs for things like Frigate servers and various ML projects. For gamers, there are some really cool use cases out there for using local LLMs to generate NPC responses in RPGs. For “smart home” enthusiasts, things like Home Assistant will be rolling out support for local LLMs later this year to make voice commands more context-aware.

    So do I want that hardware in there so I can use it MYSELF for other things? Yes, yes I do. You probably will eventually too.
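To make the NPC idea concrete: the interesting part isn’t the model call itself but feeding game state into the prompt so the response stays in character. A purely hypothetical sketch (the function name and prompt format are invented for illustration, not from any actual game or library):

```python
def build_npc_prompt(npc_name, personality, player_line, memory):
    """Assemble a prompt for a local LLM from live game state.

    Hypothetical example: a real integration would also constrain
    output length/format and pass this to whatever local inference
    runtime the game ships with.
    """
    memory_block = "\n".join(f"- {m}" for m in memory)
    return (
        f"You are {npc_name}, {personality}.\n"
        f"Things you remember:\n{memory_block}\n"
        f"The player says: \"{player_line}\"\n"
        f"Reply in character, in one short sentence."
    )

prompt = build_npc_prompt(
    "Greta",
    "a gruff blacksmith",
    "Got any swords?",
    ["the player haggled over a dagger yesterday"],
)
print(prompt)
```

Since the model runs on the NPU locally, every line can be generated per playthrough instead of being voice-acted and scripted in advance, which is exactly the kind of workload that never touches the cloud.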

    • Codilingus@sh.itjust.works
      4 months ago

      I wish someone would make software that utilizes things like an M.2 Coral TPU to enhance gameplay, like frame gen or upscaling for games and videos. Some GPUs are even starting to put M.2 slots on the card itself, in case the latency from a motherboard M.2 slot across PCIe to the GPU would be too high.

  • Xenny@lemmy.world
    4 months ago

    As with any proprietary hardware on a GPU, it all comes down to third-party software support, and classically, if the market isn’t there, it doesn’t get supported.

    • Appoxo@lemmy.dbzer0.com
      4 months ago

      Assuming there’s no catch-on after 3-4 product cycles, I’d say the tech is either not mature enough, too expensive for too little result, or (as you said) there’s generally no interest in it.

      Maybe it needs a bit of maturing and a re-introduction at a later point.

    • Appoxo@lemmy.dbzer0.com
      4 months ago

      Raytracing is something I’d pay for even if unasked, assuming it meaningfully impacts quality and doesn’t demand outlandish prices.
      And they’d need to put it in unasked and cooperate with devs, or else it won’t catch on quickly enough.
      Remember Nvidia Ansel?