Just in time for midsommar! Yeah, it’s pretty unique. I’ll make sure to drink some today for you 😄
I haven’t really looked into it, but it doesn’t seem like it.
Here's the documentation on having multiple CIDR pools in one cluster with the Cilium network driver, and it seems to imply that each Pod still only gets one IP.
https://docs.cilium.io/en/stable/network/concepts/ipam/multi-pool/
There's something called Multus that I haven't looked into, but even then it looks like that is for multiple interfaces per Pod, not multiple IPs per interface.
https://github.com/k8snetworkplumbingwg/multus-cni
Containers are just network namespaces on Linux, and all the routing is done with iptables or eBPF, so it's theoretically possible to have multiple IP addresses, but it doesn't look like anybody has started implementing it. There are actually a lot of Kubernetes clusters that just use stateful IPv6 NAT for the internal Pod network, unfortunately.
Yeah, I wonder if there are any proposals to allow multiple IPv6 addresses in Kubernetes; it would be a much better solution than NAT.
As far as I know, it’s currently not possible. Every container/Pod receives a single IPv4 and/or IPv6 address on creation from the networking driver.
I have static IPs for my Kubernetes nodes, and I actually use DHCPv6 for dynamic DNS so I can reach any device by hostname, even though most of my devices don't have static IPs.
The issue is those static IPs are tied to my current ISP, preventing me from changing ISPs without deleting my entire Kubernetes cluster.
Hurricane Electric gives me a /48.
Site-local IPv6 would work here as well, true. But then my containers wouldn't have internet access. Kubernetes containers use IPAM with a single subnet; they can't use SLAAC.
1:1 stateless NAT is useful for static IPs. Since all your addresses are otherwise global, if you need to switch providers or give up your /64, then you’ll need to re-address your static addresses. Instead, you can give your machines static private IPs, and just translate the prefix when going through NAT. It’s a lot less horrible than IPv4 NAT since there’s no connection tracking needed.
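The prefix-swap idea can be sketched with Python's ipaddress module (the prefixes here are made-up documentation examples, not anyone's real allocation). Real NPTv6 routers also keep the translation checksum-neutral; this only shows the core trick of keeping the host bits and replacing the routing prefix:

```python
import ipaddress

def translate_prefix(addr: str, old_prefix: str, new_prefix: str) -> str:
    """Swap the routing prefix of an IPv6 address, keeping the host bits."""
    address = ipaddress.IPv6Address(addr)
    old_net = ipaddress.IPv6Network(old_prefix)
    new_net = ipaddress.IPv6Network(new_prefix)
    assert old_net.prefixlen == new_net.prefixlen, "prefix lengths must match"
    # Mask off the old prefix bits, then OR in the new prefix.
    host_bits = int(address) & int(old_net.hostmask)
    return str(ipaddress.IPv6Address(int(new_net.network_address) | host_bits))

# A machine with a static private (ULA) address, seen through a new /48:
print(translate_prefix("fd00:1234:abcd::10", "fd00:1234:abcd::/48", "2001:db8:99::/48"))
# → 2001:db8:99::10
```

Because the mapping is pure arithmetic on each packet, there's no connection state to track, which is why switching upstream prefixes doesn't require renumbering anything internal.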
This is something I probably should have done setting up my home Kubernetes cluster. My current IPv6 prefix is from Hurricane Electric, and if my ISP ever gives me a real IPv6 prefix, I will have to delete the entire cluster and recreate it with the new prefix.
I’m pretty sure Swedish engineers have studied this extensively. There’s plenty of streets in the cities that ban studded tires, and there’s harsh fines if you use studded tires outside of winter.
Entire Floridian cities will be lost this hurricane season.
Cheater bots only make up like 1% of the bots in TF2.
The game is dying; the vast majority of players are bots that idle for items.
Overhyped CPUs, appealing to people who don't understand CPU design.
I just don’t see Qualcomm competing on performance per watt any time soon, let alone on software compatibility.
All of those players on the Steam charts are literally bots.
Time for the EU to regulate including digital goods in estates?
Arrowhead is a very small company. Company details are actually public knowledge in Sweden, and you can see that Arrowhead only has 4 board members, two of them being the CEO and vice CEO: https://www.allabolag.se/5567796544/befattningar
Crunchbase seems to think they don’t have any venture capital, so it’s possible that the board members are the only shareholders here, but who knows.
1. These days the machines used to etch chips (flashing light onto the silicon to pattern them, a process called photolithography) are mostly made by ASML. The most modern machines are the ASML Twinscan NXE and Twinscan EXE. The raw silicon is coated with chemicals that react to light, and when the light patterns are flashed onto the silicon, they carve out physical structures that form complex electrical circuits.
CPUs were literally drawn by hand back in the day, and then the drawing was shrunk down optically. Programs could be written into electrical memory with physical switches (think 100 light switches), punch cards, or electric typewriters. You could pause the computer so that it would wait for you to type in the next program for it to run. By the time we had kernels, we already had memory banks in the kilobytes that could store the OS between program runs. So you'd type in the OS once when you turned on the computer, and it would keep it in memory until you turned the computer off again.
The internet is different computers connected together. This website is just data sitting on a server somewhere, and your computer connects to the server over the internet and asks for the data.
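That "ask the server for data" exchange can be sketched end to end with a toy local web server (everything here, handler name and response text included, is made up for illustration):

```python
import http.server
import threading
import urllib.request

# A tiny "website": a server that answers every request with the same data.
class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick any free port.
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Your computer": connect to the server and ask for the data.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    data = resp.read()
server.shutdown()
print(data.decode())  # → hello from the server
```

A real website is the same dance, just with DNS finding the server's address and the request crossing the internet instead of staying on one machine.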
Everything is built on the shoulders of giants. There is plenty to learn, but there will always be something you don’t know.
There's tons of information online if you know where to look. There are also some good courses out there to understand more specific things like CPU design, networking, programming, etc. In university these sorts of questions fall into the field of Computer Engineering, if you're looking for a university program to get into.
With regards to the limits of programming: Making websites is already challenging enough, but the cutting edge can be rewarding too :) Software Engineering is a massive field with infinite opportunities. Start small and work your way towards more complex projects with larger teams.
Here’s a good 20 minute video about the history of making microchips: https://youtu.be/Pt9NEnWmyMo
AMD's c cores aren't quite the same as Intel's E-cores. Intel's E-cores are about 1/4 the size of its P-cores, while AMD's c cores are close in size to its standard cores, just a bit more square geometrically.
Intel's E-cores are a completely different architecture from its P-cores, while the only differences in AMD's c cores are a bit less cache and a bit lower frequency.
Intel's split is like comparing a Raspberry Pi core to a full x86 core, while AMD's c core is more like a lower-binned regular core.
AMD has "big" cores, too. Their 3D V-Cache models trade multithreaded performance for more cache. The "three core tiers" approach is very obvious in their server lineup:
https://www.servethehome.com/amd-epyc-bergamo-epyc-9754-cloud-native-sp5/
I remember people talking about 1000 Hz being the holy grail for VR headsets, though, so it seems like there's some consensus on 1000 Hz being a good limit. Frame time is just the inverse of the refresh rate.
But yeah, I've personally only used 144 Hz. I think I could see a difference with 240 Hz, but I'm not sure I'd be able to discern 480 or 1000 Hz outside of maybe VR.
1000 Hz seems to be close to the limit of human vision, since we stop seeing motion blur above 1000 Hz. Seems like a good endpoint for display technology.
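The "frame time is the inverse of the refresh rate" relationship, spelled out in milliseconds:

```python
def frame_time_ms(hz: float) -> float:
    """Time each frame spends on screen, in milliseconds: 1000 / refresh rate."""
    return 1000.0 / hz

for hz in (60, 144, 240, 480, 1000):
    print(f"{hz:>4} Hz -> {frame_time_ms(hz):.2f} ms per frame")
# At 1000 Hz each frame is on screen for only 1 ms.
```

The gains shrink fast: going from 60 to 144 Hz saves nearly 10 ms per frame, while going from 480 to 1000 Hz saves barely 1 ms, which is part of why 1000 Hz looks like a natural endpoint.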
Yeah, I'd recommend switching on your secondary machine, so you can try it out and use it properly, but not get frustrated if it does something you don't expect.
I'm using IPv6 on Kubernetes and it's amazing. Every Pod has its own global IP address. There is no NAT and no giant ARP table slowing down the other computers on my network. Each of my nodes announces a /112 for itself to my router, letting it hand out addresses to over 65k Pods. There is no practical limit to the number of IP addresses I can assign to my containers and load balancers, and no routing overhead. I don't need port forwarding on my router or have to worry about dynamic IPs, since I just have a /80 block with no firewall that I assign to my public-facing load balancers.
Of course, I only have around 300 Pods on my cluster, and realistically it's not possible to run over a million containers in current Kubernetes clusters due to other limitations. But it's still a huge upgrade in reducing overhead and complexity while increasing scale.
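The address math behind those prefix sizes, using Python's ipaddress module (the 2001:db8:: prefixes are documentation examples, not my real allocation):

```python
import ipaddress

# A /112 per node leaves 128 - 112 = 16 host bits: 2**16 = 65536 Pod addresses.
node_block = ipaddress.IPv6Network("2001:db8:0:1::/112")
print(node_block.num_addresses)  # → 65536

# A /80 for load balancers leaves 48 host bits: 2**48 addresses.
lb_block = ipaddress.IPv6Network("2001:db8:0:2::/80")
print(lb_block.num_addresses)    # → 281474976710656
```

So even the "small" per-node block dwarfs what a single node could ever run, which is why address exhaustion simply stops being a design concern.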