Recently, I had a conversation with a junior developer on my team. Let’s call him Alan. We were talking about a new notification feature that would send reminder e-mails to potentially thousands of people if they had forgotten to enter certain data in the last month or so. Alan was confident that the code he’d written was correct. “I’ve tested it well,” he said…
Having a background in astronomy, I knew going into programming that time would be an absolute bitch.
Most recently, I thought I could write a script to compute when Easter would land each year, to mark it on office timesheets. After spending an embarrassing amount of…er…time on it, I gave up and downloaded a table of pre-calculated dates. I suppose at some point, assuming the code survives that long, it will have a Y2K-style moment, but I didn’t trust my own algorithm over the table. I do think it is healthy, if not essential, not to trust your own code.
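For what it’s worth, the standard way to do this is the Anonymous Gregorian algorithm (Meeus/Jones/Butcher). A minimal Python sketch of that well-known computus (not the commenter’s script) looks something like this:

```python
import datetime

def gregorian_easter(year: int) -> datetime.date:
    """Anonymous Gregorian algorithm (Meeus/Jones/Butcher)."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return datetime.date(year, month, day + 1)

print(gregorian_easter(2024))  # 2024-03-31
print(gregorian_easter(2025))  # 2025-04-20
```

Unlike a downloaded table, it never runs out of rows, though trusting the published algorithm over your own derivation is arguably the same healthy instinct.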
I’d like to add “Splitting at code-point boundary is safe” to your list. Man, was I ever naive!
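A quick Python illustration of why that one is a falsehood: slicing at a code-point boundary never produces invalid Unicode, but it can still tear a user-perceived character (grapheme cluster) in half:

```python
# Combining marks: "café" built as 'e' + U+0301 (combining acute accent)
s = "cafe\u0301"
print(s)       # café
print(len(s))  # 5 code points for 4 visible characters
print(s[:4])   # cafe  -- the accent is silently dropped

# Emoji flags are two regional-indicator code points
flag = "\U0001F1FA\U0001F1F8"  # 🇺🇸
print(flag[:1])  # a lone regional indicator, no longer a flag
```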
Rocket programmer (and engineer) here. Can confirm that time programming is hell. Don’t even start on DST.
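For anyone who hasn’t been bitten yet, here’s a small Python sketch of the classic spring-forward trap (the dates are the 2024 US transition):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")
utc = ZoneInfo("UTC")

# 01:30 local time, half an hour before the clocks jump from 02:00 to 03:00
before = datetime(2024, 3, 10, 1, 30, tzinfo=ny)

# Wall-clock arithmetic lands on a local time that never existed that night
print(before + timedelta(hours=1))  # 2024-03-10 02:30:00-05:00

# Doing the same arithmetic in UTC and converting back gives the real answer
later = (before.astimezone(utc) + timedelta(hours=1)).astimezone(ny)
print(later)                        # 2024-03-10 03:30:00-04:00
```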
I’m sure many smaller companies have had their own internal Y2K moment as they scaled and became a big hit: they realized they’d used the wrong datatype, like int instead of long, and shit was gonna break by XYZ date if they did nothing, heh.
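A back-of-the-envelope version of that failure mode (the insert rate below is made up purely for illustration):

```python
import datetime

INT32_MAX = 2**31 - 1  # largest value of a signed 32-bit int / INT column

# Hypothetical: an auto-increment ID stored as a 32-bit int
rows_per_day = 5_000_000
days_left = INT32_MAX / rows_per_day
print(f"ID space exhausted in ~{days_left:.0f} days (~{days_left / 365:.1f} years)")

# The date-flavoured classic: 32-bit Unix timestamps run out in 2038
print(datetime.datetime.fromtimestamp(INT32_MAX, tz=datetime.timezone.utc))
# 2038-01-19 03:14:07+00:00
```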
Imagine if you were the guy who made the call on IPv4 addresses…
Give it long enough and the person who decided on IPv6 will feel the same, once every piece of matter we want to interact with can be networked.
I guess the MAC address guy is up next. 48 bits may not go so far if every light bulb is going to want its own.
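The raw numbers behind that chain of comments, just for scale:

```python
# Sizes of the address spaces mentioned above
spaces = {
    "IPv4 (32-bit)":  2**32,
    "MAC-48":         2**48,
    "IPv6 (128-bit)": 2**128,
}
for name, size in spaces.items():
    print(f"{name:<15} {size:.3e} addresses")
# IPv4 (32-bit)   4.295e+09 addresses
# MAC-48          2.815e+14 addresses
# IPv6 (128-bit)  3.403e+38 addresses
```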
Apple won’t like that doomsday event lol