• 1 Post
  • 47 Comments
Joined 1 year ago
Cake day: September 27th, 2023


  • Absolutely this. Phones are the primary device for Gen Z. Phone use doesn’t develop tech skills because there’s barely anything you can do with them. This is particularly true with iOS, but still applies to Android.

    Even as an IT administrator, there’s hardly anything I can do when troubleshooting phone problems. Oh, push notifications aren’t going through? Well, there are no useful logs or anything for me to look at, so…cool. It makes me crazy how little visibility I have into anything on iPhones or iPads. And nobody manages “Android” in general; at best they manage like two specific models of one specific brand (usually Samsung or Google). It’s impossible to manage arbitrary Android phones because there’s so little standardization and so little control over the software in the general case.


  • Ah, somehow I didn’t see 18 there and only looked at 17. Thanks!

    I tried pulling just the one package from the sid repo, but that created a cascade of dependencies, including all of llvm. I was able to get those files installed but not able to get clinfo to succeed. I also tried installing llvm-19 from the repo at https://apt.llvm.org/, with similar results. clinfo didn’t throw the fatal errors anymore, but it didn’t work, either. It still reported “Number of devices 0”, and OpenCL-based tools crashed anyway. Not with the same error, but with something generic about not finding a device or possibly having corrupt drivers.

    Should I bite the bullet and do a full upgrade to sid, or is there some way to do this more precisely that won’t muck up Bookworm?
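
    (Would APT pinning be the right approach here? A rough sketch of what I had in mind, going off the apt_preferences(5) examples, with a placeholder package name since I haven’t actually tried it yet:)

    ```
    # /etc/apt/sources.list.d/sid.list
    deb http://deb.debian.org/debian sid main

    # /etc/apt/preferences.d/99-sid
    Package: *
    Pin: release a=unstable
    Pin-Priority: 100
    ```

    The idea being that sid stays at low priority overall, and then apt install -t unstable <the-one-package> pulls only that package (plus whatever it strictly needs) from sid. No idea whether that actually avoids the llvm cascade, though.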


  • If you click the Chat button on a DDG search page, it says:

    DuckDuckGo AI Chat is a private AI-powered chat service that currently supports OpenAI’s GPT-3.5 and Anthropic’s Claude chat models.

    So at minimum they are sharing data with one additional third party, either OpenAI or Anthropic depending on which model you choose.

    OpenAI and Anthropic have similar terms and conditions for enterprise customers. Neither is completely transparent, and any given enterprise could have its own custom license terms, but my understanding is that they generally will not store queries or use them for training purposes. For a definitive answer you’d have to seek clarification from DDG; I was not able to find anything about this in DDG’s privacy policy.

    Obviously, this is not legal advice, and I do not speak for any of these companies. This is just my understanding based on the last time I looked over the OpenAI and Anthropic privacy policies, which was a few months ago.


  • Not sure if you’re referring to the graphics or to the shitty bench design. If the latter…it’s a real thing. :(

    They’re called “leaning benches” or “lean bars”. The design is “futuristic” only in the sense that adoption has just recently started taking off around the world. They’re a user-hostile design, made specifically to prevent people (homeless people in particular) from lying down, sleeping, or otherwise, y’know, using a bench as a goddamn bench. Because removing everyone’s ability to sit down is apparently, in the eyes of the authorities, a small price to pay to make homeless people’s lives that much harder.

    The Wikipedia article for “Leaning bench” redirects to hostile architecture, where you can read more about this and similar efforts, if you are in the mood to be enraged at the sheer malice of bureaucrats.

    I’ve seen them in several cities across America. NYC started rolling them out within the past decade, and you’ll see them in any recently renovated station. See https://www.nydailynews.com/2017/09/11/subway-riders-slam-brooklyn-stations-new-leaning-bars-as-incredibly-unwelcoming/ (scroll through the image slideshow to see the new design).

    Not sure if the image embed will work here but I’ll try:


  • Yeah, I wouldn’t be too confident in Facebook’s implementation, and I certainly don’t believe that their interests are aligned with their users’.

    That said, it seems like we’re reaching a turning point for big tech, where having access to private user data becomes more of a liability than an asset. Having access to the data means that they will be required by law to provide that data to governments in various circumstances. They might have other legal obligations in how they handle, store, and process that data. All of this comes with costs in terms of person-hours and infrastructure. Google specifically cited this as a reason they are moving Android location history on-device; they don’t want to deal with law enforcement constantly asking them to spy on people. It’s not because they give a shit about user privacy; it’s because they’re tired of providing law enforcement with free labor.

    I suspect it also helps them comply with some of the recent privacy protection laws in the EU, though I’m not 100% sure on that. Again, this is a liability issue for them, not a user-privacy issue.

    Also, how much valuable information were they getting from private messages in the first place? Considering how much people willingly put out in the open, and how much can be inferred simply from the metadata they still have access to (e.g. the social graph), it seems likely that the actual message data was largely redundant. Facebook is certainly in a position to measure this objectively.

    The social graph is powerful, and if you really care about privacy, you need to worry about it. If you’re a journalist, whistleblower, or political dissident, you absolutely do not want Facebook (and by extension governments) to know who you talk to or when. It doesn’t matter if they don’t know what you’re saying; the association alone is enough to blow your cover.

    The metadata problem is common to a lot of platforms. Even Signal cannot use E2EE for metadata; they need to know who you’re communicating with in order to deliver your messages to them. Signal doesn’t retain that metadata, but ultimately you need to take their word on that.
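
    To make the metadata point concrete, here’s a toy sketch (plain Python, entirely made-up names and timestamps) of how delivery records alone, with zero message content, already answer “who talks to whom, and when”:

    ```python
    from collections import defaultdict

    # Hypothetical delivery metadata a platform inevitably sees:
    # (sender, recipient, timestamp) -- no message content anywhere.
    metadata = [
        ("alice", "journalist", "2024-05-01T09:14"),
        ("alice", "journalist", "2024-05-01T09:20"),
        ("alice", "bob",        "2024-05-02T18:03"),
        ("carol", "journalist", "2024-05-03T07:45"),
    ]

    # Build the social graph: who has ever exchanged messages with whom.
    contacts = defaultdict(set)
    for sender, recipient, _ in metadata:
        contacts[sender].add(recipient)
        contacts[recipient].add(sender)

    print(sorted(contacts["alice"]))  # ['bob', 'journalist']
    print([t for s, r, t in metadata if "journalist" in (s, r)])
    # -> every timestamp of contact with the journalist
    ```

    Everything in that sketch comes from routing data the platform needs just to deliver messages, which is exactly the problem.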