Apple rolled out its LLM-powered “Apple Intelligence” update to iPhone, iPad, and Macintosh computer users starting in October. One feature of Apple Intelligence is to summarize multiple push messa…
I’ve been reading (and subscribing to) Ars Technica for a long time (20+ years reading, ~10 year sub).
While they have pretty solid coverage on many topics (science, US public policy, general tech), their coverage of Apple has always been very biased. The Apple fanboys in the comments are also extremely annoying and pathetic.
If you go over to r/singularity they will have the same “it’s an LLM, you can’t expect no errors! AGI here we come” attitude too.
Constantly apologizing for LLMs and promising the next release will be better (or perfect) is… it's like a cult. Even more so than with Apple fans, because they see it as reshaping reality via the singularity.
I’m disappointed that Apple jumped in while the error rate is still high. It’s almost like everyone on the side of modern AI just wants us all to get over it, get used to the errors, and trust in future iterations.
Even the commenters get stuck into Sam Axon this time.