DefederateLemmyMl

  • Gen𝕏
  • Engineer ⚙
  • Techie 💻
  • Linux user 🐧
  • Ukraine supporter 🇺🇦
  • Pro science 💉
  • Dutch speaker
  • 0 Posts
  • 181 Comments
Joined 11 months ago
Cake day: August 8th, 2023



  • We are talking about addresses, not counters. An inherently hierarchical one at that. If you don’t use the bits you are actually wasting them.

    Bullshit.

    I have a 64-bit computer that can address up to 18.4 exabytes, but it only has 32GB of RAM, so I will never use the vast majority of that address space. Am I “wasting” it?

    All the 128 bits are used in IPv6. ;)

    Yes, they are all “used”, but you don’t need them. We are not using 2^128 IP addresses in the world. In your own terminology: you are using 4 registers for a 2-register problem. That is much more wasteful in terms of hardware than using 40 bits to represent an IP address and wasting 24 bits.
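    To put numbers on that, a quick back-of-the-envelope comparison (plain Python, nothing project-specific):

    ```python
    # Rough sizes of the address spaces being argued about.
    SPACES = {
        "IPv4 (32-bit)": 2**32,             # ~4.3 billion addresses
        "IPv4 + one octet (40-bit)": 2**40, # ~1.1 trillion addresses
        "64-bit pointers": 2**64,           # ~18.4 exabytes of byte addresses
        "IPv6 (128-bit)": 2**128,           # ~3.4e38 addresses
    }

    for name, count in SPACES.items():
        print(f"{name}: {count:.3e}")
    ```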


  • you are wasting 24 bits of a 64-bit register

    You’re not “wasting” them if you just don’t need the extra bits. Are you wasting a 32-bit integer if your program only ever counts up to 1,000,000?

    Even so, when you do start to need them, you can gradually make the other bits available in the form of more octets. Like you can just define it as a.b.c.d.e = 0.a.b.c.d.e = 0.0.a.b.c.d.e = 0.0.0.a.b.c.d.e (see the sketch after this comment).

    Recall that IPv6 came out just a year before the Nintendo 64

    If you’re worried about wasting registers, it makes even less sense to switch from a 32-bit address space to a 128-bit one in one go.

    Anyway, your explanation is a perfect example of the “second-system effect” at work. You get all caught up in the mistakes of the first system, in this case the lack of addressing bits, and then you go all out to correct those mistakes in your second system while ignoring the real-world implications of your choices. And now you are surprised that nobody wants to use your 128-bit abomination.
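    A minimal sketch of that “just add octets” scheme, assuming missing leading octets simply default to zero (the function name is made up):

    ```python
    def parse_extended_ipv4(addr: str, width: int = 5) -> int:
        """Parse a dotted address of up to `width` octets into an integer,
        treating missing leading octets as zero, so that
        a.b.c.d.e == 0.a.b.c.d.e == 0.0.a.b.c.d.e and so on."""
        octets = [int(part) for part in addr.split(".")]
        if len(octets) > width or any(not 0 <= o <= 255 for o in octets):
            raise ValueError(f"invalid address: {addr!r}")
        octets = [0] * (width - len(octets)) + octets  # implied leading zeros
        value = 0
        for octet in octets:
            value = (value << 8) | octet
        return value

    # Existing 4-octet addresses keep their value, so old configs stay valid:
    assert parse_extended_ipv4("192.168.1.1") == parse_extended_ipv4("0.192.168.1.1")
    ```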




  • You don’t even have to NAT the fuck out of your network. NAT is usually only needed in one place: where your internal network meets the outside world, and it provides a clean separation between the two as well, which I like.

    For most internal networks there really are no advantages to moving to IPv6 other than bragging rights.

    The more I think about it, the more I find IPv6 a huge, overly complicated mistake. For the issue they wanted to solve, the worldwide public IP shortage, they could have just added an octet to IPv4 to multiply the number of available addresses by 256 (2^32 × 256 = 2^40, about 1.1 trillion) and called it a day. Not every square cm of the planet needs a public IP.


  • It’s when you have to set static routes and such.

    For example, I have a couple of locations tied together with a WireGuard site-to-site VPN, each with several subnets. I had to write wg config files and set static routes with hardcoded subnets and IP addresses. Writing the wg config files and getting everything working was already a bit daunting with IPv4, because I was also wrapping my head around WireGuard concepts at the same time. It would have been so much worse to debug with unreadable IPv6 subnet addresses.

    Network ACLs and firewall rules are another place where you have to work with raw IPv6 addresses (illustrated below). For example: let’s say you have a Samba share or proxy server that you only want to be accessible from one specific subnet; you have to spell out that subnet, and with IPv6 that means raw IPv6 addresses. You can’t solve that with DNS names.

    Anyway my point is: the idea that you can simply avoid IPv6’s complexity by using DNS names is just wrong.
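    To illustrate: an ACL or firewall rule is at bottom a subnet-membership test on raw addresses, which Python’s standard ipaddress module makes explicit (the prefixes here are made up):

    ```python
    import ipaddress

    lan_v4 = ipaddress.ip_network("192.168.10.0/24")        # made-up IPv4 subnet
    lan_v6 = ipaddress.ip_network("fd12:3456:789a:1::/64")  # made-up IPv6 ULA prefix

    print(ipaddress.ip_address("192.168.10.42") in lan_v4)         # True
    print(ipaddress.ip_address("fd12:3456:789a:1::42") in lan_v6)  # True
    ```

    Either way the rule has to name the prefix itself; a DNS name can’t stand in for “this whole subnet”, and the IPv6 version is simply harder to eyeball and type correctly.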



  • Nah, Reddit restoring comments is a myth. You just didn’t delete all your comments, even though you probably thought you did.

    See, Reddit, being the duplicitous bitch that it is, doesn’t really show you all your comments when you go to your profile. It’s limited to your last 1000 (?) comments or so; any comment that goes beyond that horizon is gone from your view forever, but it still exists in the thread.

    The way to solve that is to first do a GDPR request. After a few weeks you will receive a zip file containing a file with all your comments and links to them. You can then point an overwrite-and-delete tool at this information. It will likely run for several hours or even days, depending on how many comments you made, because Reddit throttles edit and delete requests, but it will effectively delete everything.
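    For the curious, such a tool boils down to something like this sketch using the PRAW library. The credentials are placeholders, and I’m assuming the export’s comments.csv has an id column; treat both as assumptions, not gospel:

    ```python
    import csv
    import time

    import praw  # third-party: pip install praw

    # Placeholder credentials: create a "script" app at reddit.com/prefs/apps
    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        username="...",
        password="...",
        user_agent="comment-shredder/0.1 (hypothetical)",
    )

    # Assumption: the GDPR export zip contains comments.csv with an "id" column.
    with open("comments.csv", newline="") as f:
        for row in csv.DictReader(f):
            comment = reddit.comment(id=row["id"])
            comment.edit(body=".")  # overwrite first, so the old text is gone
            comment.delete()
            time.sleep(2)  # crude throttle; this is why it runs for hours or days
    ```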







  • I ran it perfectly on a 33MHz 486 with 4mb RAM for a long time. Even Doom II with some of its heavier maps ran fine.

    “Perfectly” would mean it ran at 35fps, the maximum framerate DOS Doom is capped at. In the standard Doom benchmark, a dx33 gets about half that: 18fps average in demo3 of the shareware version with the window size reduced one step (there’s a quick fps helper after this comment). Demo3 runs on E1M7, which isn’t the heaviest map, so heavier maps would bog the dx33 down even more.

    I’m sure you found that acceptable at the time, and that you look back on it with slightly rose-tinted glasses of nostalgia, but a dx2/66 and preferably even better definitely gave you a much better experience, which was my point.
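    For reference, the standard benchmark is vanilla Doom’s -timedemo, which prints “timed <gametics> gametics in <realtics> realtics”; average fps is gametics × 35 / realtics, since the game logic (and the framerate cap) runs at 35Hz. A quick helper, with a realtics figure invented to land near the dx33’s ~18fps:

    ```python
    TICRATE = 35  # Doom's internal tic rate, also the DOS framerate cap

    def timedemo_fps(gametics: int, realtics: int) -> float:
        """Average fps from vanilla Doom's '-timedemo' report."""
        return gametics * TICRATE / realtics

    # demo3 of shareware 1.9 is the commonly quoted 2134 gametics; the
    # realtics value here is made up to match a dx33-class result.
    print(f"{timedemo_fps(2134, 4150):.1f} fps")  # -> 18.0 fps
    ```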


  • If anyone can enlighten me, This is pretty much why you can find DooM on almost any platform BECAUSE of its Linux code port roots?

    I mean, yeah. Doom was extremely popular and had a huge cultural impact in the 90s. It was also the first game of that magnitude to have its source freely released. So naturally people tried to port it to everything, and “but can it run Doom?” became a meme of its own.

    It also helps that the system requirements are very modest by today’s standards.


  • It ran like absolute ass on 386 hardware though, and it required at least 4MB of RAM, which was also not so common in 386 computers. Source: I had a 386 at the time and couldn’t play Doom until I got a Pentium a few years later.

    Even on lower clocked 486 hardware it wasn’t that great. IIRC, it needed about a 486 DX2/66 to really start to shine.



  • Without knowing what was being hosted, the only surefire way would be pulling a complete disk image with cat or dd.

    That’s not surefire, unless you’re doing it offline. If the data is in motion (like a database that’s being updated), you will end up with an inconsistent or corrupt backup.

    Surefire in that case would be something like an LVM snapshot (see the sketch at the end of this comment).

    If you wanted to stay on a similar system, RHEL 9 would be a good option or one of its “as similar as possible” like AlmaLinux.

    No love for Rocky?

    Also, Oracle Linux is still free and fully compatible with RHEL.
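    Back to the snapshot point above, a rough sketch of the lvcreate/dd/lvremove sequence, wrapped in Python for readability (volume names are hypothetical; run as root):

    ```python
    import subprocess

    VG, LV = "vg0", "root"  # hypothetical volume group / logical volume
    SNAP = f"{LV}-backup"

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Freeze a point-in-time view of the live volume.
    run("lvcreate", "--snapshot", "--size", "5G", "--name", SNAP, f"/dev/{VG}/{LV}")
    try:
        # 2. Image the snapshot, not the live device, so the copy is consistent.
        run("dd", f"if=/dev/{VG}/{SNAP}", "of=backup.img", "bs=4M", "status=progress")
    finally:
        # 3. Drop the snapshot once the image is taken.
        run("lvremove", "--force", f"/dev/{VG}/{SNAP}")
    ```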