• 21 Posts
  • 275 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • Yep.

    There are two big end-user security decisions that are totally mystifying to me about Lemmy. One is automatically embedding images in comments without rehosting the images, and the other is failing to warn people that their upvotes and downvotes are not actually private.

    I’m not trying to sit in judgement of someone who’s writing free software but to me those are both negligent software design from an end-user privacy perspective.


  • Of note: image links in comments aren’t rehosted by Lemmy. That means it would be possible to flood a community with images hosted on a friendly or compromised server, and gather a lot of information about who was reading that community (how many people, plus all their IP address and browser fingerprint information, to start with) based on which image requests came in as people saw your spam.
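
    To make the mechanism concrete, all it takes is ordinary Markdown pointing at a host the attacker controls (tracking.example is hypothetical, purely for illustration):

    ![totally innocent meme](https://tracking.example/meme-001.png)

    Every client that renders the comment fetches that URL directly, handing the attacker’s server each reader’s IP address, user agent, and request timing.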

    I didn’t look at the image spam in detail, but if I’m remembering right the little bit of it I looked at, it had images hosted by lemmygrad.ml (which makes sense) and czchan.org (which makes less sense). It could be that after uploading the first two images to Lemmygrad they realized they could just type the Markdown for the original hosting source for the remaining three, of course.

    It would also be possible to use this type of flood posting as a smokescreen for a more targeted plan of sending malware-infected images, or more specifically targeted let’s-track-who-requests-this-image-file images, to a more limited set of recipients.

    Just my paranoid thoughts on the situation.



  • The layers at the tip of his tusk had strontium levels that matched the site where he had been unearthed. The researchers then looked at a layer formed a week before his death and searched a geochemical map for places where Kik might have been that had a matching strontium level. The team worked back through time, week after week, piecing together Kik’s whereabouts over the course of his life.

    As it turned out, Kik grew up far from the northern reaches where he met his end. When he was a young mammoth, he followed his herd around eastern Alaska. In his adult years, Kik moved widely across central Alaska. And in the last 18 months of his life, he ended up on the north side of the Brooks Range, where he likely died of starvation.

    In the new study, published on Wednesday in the journal Science Advances, Dr. Wooller and his colleagues examined Elma’s six-foot-long tusk. Unlike Kik, her remains were found by Chuck Holmes, an archaeologist at the University of Alaska Fairbanks, at the Swan Point archaeological site in Alaska. While Kik died far from people, Elma’s remains ended up in a hunting and fishing camp; she may have been the victim of a hunt.

    THIS IS SO COOL




  • Mozilla/5.0 (Android 10; Mobile; rv:121.0) Gecko/121.0 Firefox/121.0.

    I just did a bunch of testing. The issue is that final version number, “Firefox/121.0”. Google returns very different versions of the page based on what browser you claim to be, and if you’re on mobile Firefox, which version of the page it serves depends on the Firefox version you claim:

    % wget -O - -nv -U 'Mozilla/5.0 (Android 10; Mobile; rv:62.0) Gecko/121.0 Firefox/41.0' https://www.google.com/ | wc -c
    2024-01-08 15:54:29 URL:https://www.google.com/ [1985] -> "-" [1]
        1985
    % wget -O - -nv -U 'Mozilla/5.0 (Android 10; Mobile; rv:62.0) Gecko/121.0 Firefox/62.0' https://www.google.com/ | wc -c
    2024-01-08 15:54:36 URL:https://www.google.com/ [211455] -> "-" [1]
      211455
    % wget -O - -nv -U 'Mozilla/5.0 (Android 10; Mobile; rv:62.0) Gecko/121.0 Firefox/80.0' https://www.google.com/ | wc -c
    2024-01-08 15:52:24 URL:https://www.google.com/ [15] -> "-" [1]
          15
    % wget -O - -nv -U 'Mozilla/5.0 (Android 10; Mobile; rv:62.0) Gecko/121.0 Firefox/121.0' https://www.google.com/ | wc -c
    2024-01-08 15:52:04 URL:https://www.google.com/ [15] -> "-" [1]
          15
    

    If you’re an early version of Firefox, it gives you a simple page. If you’re a later version of Firefox, it gives you a much more complete version of the page. But if you claim to be a specific version of mobile Firefox that (edit: apparently) didn’t exist when they set this logic up, or something like that, it gets confused and gives you nothing. You could argue that it should default to some sensible mobile version in this case, and they should definitely fix it, but it seems to me like it’s clearly not malicious.

    Edit: Wait, I am wrong. I didn’t realize Firefox’s version numbers went up so high. It looks like the cutoff where the blank pages start coming back is at version 65, which is early 2019, so not real old at all. I still maintain that it’s probably accidental, but it looks like it affects basically all modern mobile Firefoxes, yes.




  • This is a great article which unfortunately also does a great job of meandering and overcomplicating its point.

    TL;DR The popular understanding of the Dunning-Kruger effect is that incapable people think they’re more capable than the actually-capable people do. That’s wrong. What the data actually show is that people tend to estimate their ability as being somewhere in the middle. So the incapable “overestimate” their skill - putting it “a little below average” when it should be much further below average - and exceptionally capable people make the mirror-image error on the other side. The correlation between estimated skill and actual skill is actually positive - just not as strong as it “should” be - not negative as the popular understanding would suggest. The negative correlation only appears when you subtract people’s actual skill from their estimated skill, making it arguably just an autocorrelation artifact.

    Lots more data and details in the article but that’s the gist.
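
    You can watch the artifact appear from pure noise with a one-off simulation (a sketch in awk; everyone gets a random skill and a self-estimate that has no relationship to skill at all):

    % awk 'BEGIN {
        srand(1); n = 100000
        for (i = 0; i < n; i++) {
            skill = rand()        # actual skill, uniform on [0,1]
            est = rand()          # self-estimate: pure noise, unrelated to skill
            y = est - skill       # the overestimation score
            sx += skill; sy += y
            sxx += skill*skill; syy += y*y; sxy += skill*y
        }
        # Pearson correlation of skill vs. (estimate - skill)
        r = (n*sxy - sx*sy) / sqrt((n*sxx - sx*sx) * (n*syy - sy*sy))
        printf "r = %.3f\n", r    # about -0.71 despite zero real effect
    }'

    Even though the estimates contain no information about skill at all, subtracting skill from them manufactures a strong negative correlation, which is the autocorrelation point above.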



  • “It did not occur to me then — and remains surprising to me now — that Mr. Schwartz would drop the cases into his submission wholesale without even confirming that they existed,” Cohen said. “Accordingly, when I saw the citations and descriptions I had sent Mr. Schwartz quoted at length in the draft filing, I assumed that Mr. Schwartz had reviewed and verified that information and deemed it appropriate to submit to the court.”

    Bro:

    Even if this is true, don’t throw your goddamned lawyer under the bus. Just say we’re very sorry, we fucked up, I was the one that researched it initially, we won’t do it again. Finger-pointing about it to the judge does 0% good and a whole lot of bad.

    I like Cohen because he manned up and admitted he was wrong but this was a little reminder to me that POS is still in his DNA.



  • Yeah. To me it seems transparently obvious that at least some applications of AI will keep changing the world - maybe in a big way - even after the bust that will inevitably hit the AI-adjacent business side once the current boom ends. I agree with Doctorow on everything he’s saying about the business side, but that’s not the only side, and it’s a little weird that he’s focusing exclusively on that aspect. But what the hell, he’s smart and I hadn’t seen this particular business-side perspective before.






  • Yeah. Seeing the development thought process at work during the engineering of git was really cool. The philosophy was basically, at its core it’s not a version control system. It’s a content-addressable filesystem. That’s what you need in order to build a good distributed version control, so we’ll make two layers and make each one individually very good at what it does. Then in a UI sense, the idea was to give you the tools to be able to do needed operations easily, but still expose the underlying stuff if you need direct access to it. And then to optimize the whole thing to within an inch of its life under the types of workloads it’ll probably be experiencing when being used as version control.
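
    You can still see that layering today: the plumbing commands expose the content-addressable store directly, underneath the version-control porcelain. For example:

    % echo 'hello world' | git hash-object -w --stdin
    3b18e512dba79e4c8300dd08aeb37f8e728b8dad
    % git cat-file -p 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
    hello world

    Content goes in and comes back out purely by the hash of its bytes; commits, trees, and branches are all built on top of that one primitive.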

    It was also, as far as I’m aware, the first nontrivial use of something like a blockchain. The property where each commit is referred to by its hash, and the hash encompasses the hash of the previous commit, was a necessary step for security and an obvious-in-retrospect way to identify commits in a unique way.
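
    You can see the chain in any repo (the hashes below are made up for illustration, and the output is abbreviated):

    % git cat-file -p HEAD
    tree 9bb3a55dd3f1b9b0d6d6a9b6c8f2f6c1a2b3c4d5
    parent 2f4cb1aa9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b
    author Jane Hacker <jane@example.com> 1704672000 -0800
    ...

    Because the parent hash is part of what gets hashed, changing any commit changes every hash downstream of it, which is exactly the tamper-evidence property.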

    Basically the combination of innovative design with a bunch of core concepts that weren’t really commonly in use at the time, combined with excellent engineering to make it all solid and working well, was pretty mind-blowing to see, and it all came together in just a few weeks and started to get used for real in a big sense. Then, the revolution accomplished, Linus handed git off to someone else and everyone just got back to work on the kernel.


  • mo_ztt ✅@lemmy.world to Git@programming.dev · The World Before Git

    I’m one of those who was unfortunate enough to use SVN.

    Same. I guess I’m an old guy, because I literally started with RCS, then the big step up that was CVS, and then used CVS for quite some time while it was the standard. SVN was always ass. I can’t even really put my finger on what was so bad about it; I just remember it being an unpleasant experience, for all it was supposed to “fix” the difficulties with CVS. I much preferred CVS. Perforce was fine, and used basically the exact same model as SVN just with some polish, so I think the issue was the performance and interface.

    Also, my god, you gave me flashbacks to the days when a merge conflict would dump the details of the conflict into your source file and you’d have to go in and clean it up manually in the editor. I’d forgotten about that. It wasn’t pleasant.
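
    For anyone who never had the pleasure, it looked roughly like this, dumped straight into your source file (illustrative; the marker labels varied by tool and revision):

    <<<<<<< foo.c
        result = compute(x);
    =======
        result = compute_fast(x);
    >>>>>>> 1.42

    You’d grep for <<<<<<< and hand-resolve every one before the file would even compile again.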

    Git interface really shows the fact that it is created by kernel developers for kernel developers (more on this later).

    Yeah, absolutely. I was going to talk about this a little but my thing was already long. The two most notable features of git are its high performance and its incredibly cryptic interface, and knowing the history makes it make a lot of sense why that is.

    Mercurial interface, on the other hand, is well thought out and easy to figure out. This is surprising because both Git and Mercurial share a similar model of revisions. Mercurial was born a few days after Git. It even stood a chance of winning the race to become the dominant VCS. But Mercurial lost kernel developers’ mindshare due to Python - it simply wasn’t as fast as Git.

    Yeah. I was present on the linux-kernel mailing list while all this was going on, purely as a fanboy, and I remember Linus’s fanatical attention to performance as a key consideration at every stage. I actually remember there was some level of skepticism about the philosophy of “just download the whole history from the beginning of time to your local machine if you want to do anything” – like the time and space requirements in order to do that probably wouldn’t be feasible for a massive source tree with a long history. Now that it’s reality, it doesn’t seem weird, but at the time it seemed like a pretty outlandish approach, because with the VCS technologies that existed at the time it would have been murder. But, the kernel developers are not lacking in engineering capabilities, and clean design and several rounds of optimization to figure out clever ways to tighten things up made it work fine, and now it’s normal.
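
    (Amusing postscript: git itself eventually grew an escape hatch for exactly that concern, if you want the current tree without the whole history:)

    % git clone --depth 1 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git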

    Perhaps this is no more apparent than in the case of quilt. Quilt is software used to manage a ‘stack of patches’. It gives you the ability to absorb changes to source code into a patch and apply or remove a set of patches. This is as close as you can get to a VCS without being a VCS. Kernel devs still use quilt sometimes and exchange quilt patch stacks. Git even has a command for importing quilt patch stacks - git-quiltimport. There are even tools that integrate patch stacks into Git - like stgit. If you haven’t tried it yet, you should. It’s hard to predict if you’ll like it. But if you do, it becomes a powerful tool in your arsenal. It’s like rebase on steroids. (aside: This functionality is built into mercurial).

    That’s cool. Yeah, I’ll look into it; I have no need of it for any real work I’m doing right now but it sounds like a good tool to be familiar with.
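
    From a quick skim of the docs, the basic loop looks like this (a sketch; the patch and file names are made up):

    % quilt new fix-overflow.patch   # start a new patch on top of the stack
    % quilt add drivers/foo.c        # track this file in the current patch
    % $EDITOR drivers/foo.c          # make the change
    % quilt refresh                  # absorb the edits into the patch file
    % quilt pop                      # unapply the top patch
    % quilt push                     # reapply it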

    I still remember the days of big changes to the kernel being sent to the mailing list as massive series of organized patchsets (like 20 or more messages with each one having a pretty nontrivial patchset to implement some piece of the change), with each patch set as a conceptually distinct change, so you could review them one at a time and at the end understand the whole huge change from start to finish and apply it to your tree if you wanted to. Stuff like that was why I read the mailing list; I just remember being in awe of the type of engineering chops and the diligence applied to everyone working together that was on display.

    I recently got into packaging for Linux. Trust me - there’s nothing as easy or convenient as dealing with patches. It’s closer to plain vanilla files than any VCS ever was.

    Agreed. I was a little critical-sounding of diff and patch as a system, but honestly patches are great; there’s a reason they used that system for so long.
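
    The whole workflow is two commands, which is a big part of the charm:

    % diff -u foo.c.orig foo.c > fix.patch   # capture the change as a unified diff
    % patch -p0 < fix.patch                  # apply it in a tree that has foo.c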

    As I understand, the biggest problem was that not everyone was given equal access. Most significantly, many developers didn’t have access to the repo metadata. The metadata that was necessary to perform things like blame, bisect or even diffs.

    Sounds right. It sounds like your memory on it is better than mine, but I remember there being some sort of “export” where people who didn’t want to use bk could look at the kernel source tree as a linear sequence of commits (i.e. not really making it clear what had happened if someone merged together two sequences of commits that had been developed separately for a while). It wasn’t good enough to do necessary work, more just a stopgap if someone needed to check out the current development head or something, and that’s it.

    That sounds accurate. To add more context, it was Andrew Tridgell who ‘reverse engineered’ it. He became the target of Torvalds’ ire due to this. He did reveal his ‘reverse engineering’ later. He telnetted into the server and typed ‘help’.

    😆

    I’ll update my comment to reflect this history, since I didn’t remember this level of detail.


  • mo_ztt ✅@lemmy.world to Git@programming.dev · The World Before Git

    I know right? I was all excited when I saw the OP article because I was like, oh cool, someone’s telling the story about this neat little piece of computing history. Then I went and read it and it was like “ChatGPT please tell me about the history of source control in a fairly boring style, with one short paragraph devoted to each applicable piece of technology.”