Just need the right name for it. Soundblasters are still being produced aren’t they? There’s always a market.
Well yeah, because dedicated DACs have a tangible benefit of better audio. If you want better audio you need to buy a quality DAC and quality cans.
I also used to think it’s dumb because who cares as long as you can hear. But then I built a new PC and I don’t know if it was a faulty mobo or just unlucky setup but the internal DAC started picking up static. So I got an external DAC and what I noticed was that the audio sounded clearer and I could hear things in the sound that I couldn’t hear before. It was magical, it’s like someone added new layers into my favorite songs. I had taken the audio crack.
I pretty quickly gave away my DAC along with my Audio-Technicas because I could feel the urge. I needed another hit. I needed more. I got this gnawing itch and I knew I had to get out before the addiction completely took over. Now I live in static because I do not dare to touch the sun again.
Soundblasters may be shit but the hardware they’re supposed to sell is legit, it has a tangible benefit to whoever can tell the difference. But with AI, what is the tangible benefit that you couldn’t get by getting a better GPU?
Poll shows 84% of PC users are suckers.
You like having to pay more for AI?
I feel like the sarcasm was pretty obvious in that comment, but maybe I’m missing something.
Unless you’re doing music or graphics design there’s no use case. And if you do, you probably have a high-end GPU anyway.
I could see a use for local text gen, but that apparently eats quite a bit more than what desktop PCs can offer if you want actually good results and speed. Generally though, I’d rather have separate expansion cards for this. Making it part of other processors is just going to increase their price, even for those who have no use for it.
There are local models for text gen - not as good as chatGPT but at the same time they’re uncensored - so it may or may not be useful
The dedicated TPM chip is already being used for side-channel attacks. A new processor running arbitrary code would be a black hat’s wet dream.
It’s not a full CPU. It’s more limited than a GPU.
That’s why I wrote “processor” and not CPU.
Do you have an article on that handy? I like reading about side channel and timing attacks.
That’s insane. How can they be doing security hardware and leave a timing attack in there?
Thank you for those links, really interesting stuff.
It will be.
IoT devices are already getting owned at staggering rates. Adding a learning model that currently cannot be secured is absolutely going to happen, and going to cause a whole new large batch of breaches.
The “s” in IoT stands for “security”
I’m willing to pay extra for software that isn’t
Okay, but hear me out. What if the OS got way worse, and then I told you that paying me for the AI feature would restore it to a near-baseline level of original performance? What then, eh?
I already moved to Linux. Windows is basically doing this already.
One word. Linux.
Show the actual use case in a convincing way and people will line up around the block. Generating some funny pictures or making generic suggestions about your calendar won’t cut it.
I completely agree. There are some killer AI apps, but why should AI run on my OS? Recall is a complete disaster of a product and I hope it doesn’t see the light of day, but I’ve no doubt that there’s a place for AI on the PC.
Whatever application there is for AI at the OS level, it needs to be a trustless system that the user has complete control of. I’d be all for an open-source AI running at that level, but Microsoft is not going to do that, because they want to ensure that they control your OS data.
Machine learning in the OS is a great value-add for medium to large companies, as it will allow them to track the real productivity of office workers and easily replace them. Say goodbye to middle management.
Tbh this is probably for things like DLSS, captions, etc. Not necessarily for chatbots or generative art.
The biggest surprise here is that as many as 16% are willing to pay more…
Acktually it’s 7% that would pay, with the remainder ‘unsure’
I mean, if framegen and supersampling solutions become so good on those chips that regular versions can’t compare I guess I would get the AI version. I wouldn’t pay extra compared to current pricing though
A big letdown for me is that, with some rare exceptions, those extra AI features are useless outside of AI. Some NPUs are straight-up DSPs that could easily run OpenCL code; others are designed to handle only machine-learning-oriented floating point formats rather than normal ones, or are CPU extensions that are just even bigger vector multipliers for select data types (AMX).
Who in the heck are the 16%
Maybe people doing AI development who want the option of running local models.
But baking AI into all consumer hardware is dumb. Very few want it. SaaS AI is a thing. To the degree SaaS AI doesn’t offer the privacy of local AI, networked local AI on devices you don’t fully control offers even less. So it makes no sense for people who value convenience. It offers no value for people who want privacy. It only offers value to people doing software development who need more playground options, and I can go buy a graphics card myself, thank you very much.
- The ones who have investments in AI
- The ones who listen to the marketing
- The ones who are big Weird Al fans
- The ones who didn’t understand the question
I would pay for Weird-Al enhanced PC hardware.
Those Weird Al fans will be very disappointed
I’m interested in hardware that can better run local models. Right now the best bet is a GPU, but I’d be interested in a laptop with dedicated chips for AI that would work with pytorch. I’m a novice but I know it takes forever on my current laptop.
Not interested in running copilot better though.
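For anyone in the same boat, a quick sanity check is whether PyTorch even sees an accelerator on your machine; a small device-selection helper makes the difference obvious (a minimal sketch: the `pick_device` name is my own, and note that most laptop NPUs need vendor-specific plugins rather than stock PyTorch):

```python
import torch

def pick_device() -> torch.device:
    """Pick the fastest available PyTorch backend, falling back to CPU."""
    if torch.cuda.is_available():                # NVIDIA (or ROCm-built) GPU
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():   # Apple Silicon GPU
        return torch.device("mps")
    return torch.device("cpu")                   # universal but slow for training

device = pick_device()
model = torch.nn.Linear(128, 10).to(device)      # any model moves the same way
x = torch.randn(32, 128, device=device)
print(model(x).shape)                            # torch.Size([32, 10])
```

If this prints `cpu`, that’s why training takes forever; everything is running on the CPU cores.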
Let me put it in lamens terms… FUCK AI… Thanks, have a great day
FYI the term is “layman’s”, as if you were using the language of a layman, or someone who is not specifically experienced in the topic.
Sounds like something a lameman would say
Well, when life hands you lémons…
I’m fine with NPUs / TPUs (AI-enhancing hardware) being included with systems because it’s useful for more than just OS shenanigans and commercial generative AI. Do I want Microsoft CoPilot Recall running on that hardware? No.
However I’ve bought TPUs for things like Frigate servers and various ML projects. For gamers there’s some really cool use cases out there for using local LLMs to generate NPC responses in RPGs. For “Smart Home” enthusiasts things like Home Assistant will be rolling out support for local LLMs later this year to make voice commands more context aware.
So do I want that hardware in there so I can use it MYSELF for other things? Yes, yes I do. You probably will eventually too.
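To make the NPC idea concrete, here’s a rough sketch of how a game could talk to a locally hosted model (assuming an Ollama server on its default port; the helper names and prompt wording are my own invention, not from any shipping game):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_npc_prompt(npc_name: str, scene: str, player_line: str) -> str:
    # Wrap the current game state into a plain-text prompt for the model.
    return (
        f"You are {npc_name}, a character in a fantasy RPG.\n"
        f"Scene: {scene}\n"
        f'The player says: "{player_line}"\n'
        "Reply in character, in one short sentence."
    )

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    # POST to the local server; "stream": False returns one JSON blob
    # with the full reply in its "response" field.
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Everything stays on your own box: no API keys, no per-token billing, and the NPU/GPU in question is what makes the reply fast enough to feel interactive.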
I wish someone would make software that utilizes things like an M.2 Coral TPU to enhance gameplay, like frame gen or upscaling for games and videos. Some GPUs are even starting to put M.2 slots on the GPU itself, in case the latency from a motherboard M.2 slot to the PCIe GPU would be too high.
As with any proprietary hardware on a GPU it all comes down to third party software support and classically if the market isn’t there then it’s not supported.
Assuming there’s no catch-on after 3-4 cycles, I’d say the tech is either not mature enough, too expensive for too little result, or (as you said) there’s generally no interest in it.
Maybe it needs a bit of maturing and a re-introduction at a later point.
I am generally unwilling to pay extra for features I don’t need and didn’t ask for.
Raytracing is something I’d pay for even if unasked, assuming it meaningfully impacts the quality and doesn’t demand outlandish prices.
And they’d need to put it in unasked and cooperate with devs, or else it won’t catch on quickly enough.
Remember Nvidia Ansel?