My understanding is that they sent V’Ger a command to do “something,” and then the gibberish it was sending changed, and that was the “here’s everything” signal.
And yeah, I’m calling it V’Ger from now on.
And yeah, I’m calling it V’Ger from now on
Have my upvote.
Why haven’t we been doing this already? I’m with you, let’s make this happen!
Only so long as we ensure we keep a stable humpback whale population. I don’t wanna be the guy that has to figure out how to make a temporal slingshot maneuver work.
This seems to be positive news on that front.
Are we really gonna have to have a time travel based Star Trek Movie for all the species out there to manage to get around to fixing climate change?
From what I read, there was damage to the memory in certain places, so they had to move the code into spare areas of memory.
It’s an astounding feat tbh.
One specific chip had damaged memory
They specifically sent it a command to send a full memory dump after it went haywire. It wasn’t a fluke.
Sure - I didn’t know what “something” was. And what I’d read was that someone had to figure out that they were receiving a full memory dump, which suggested to me that they hadn’t specifically asked for that.
Damn impressive as hell. Also, on a completely unrelated note, how is this a meme?
My friend, just let the memes flow. You do not need to understand or gatekeep.
Wait, gatekeep? Which part of my comment looks like I wanna gatekeep anything?
I’d also be surprised if anyone working on the project was even alive when the code was written.
I think the term “metal” is overused, but this is probably the most metal thing a programmer could possibly do besides join a metal band.
Or activate Skynet.
I wonder how it is secured, or could anyone with a big enough transmitter reprogram it at will…
Lol.
Why is it only sending back dickbutt memes?
Modern satellites are protected by various means of encryption, but there’s an enthusiast community that tracks down and communicates with zombie satellites. There’s even been an NGO that managed to fire the thrusters of an abandoned NASA/ESA probe before (with their approval).
The Voyagers benefit primarily from the lack of groups with an adequate deep space network to communicate with them. They’re otherwise completely open and well documented.
Thanks for that link, cool stuff!
“Yeah, I always leave my car unlocked with the keys inside. I also always park it in the center of a lake.”
More like, below the lake.
I think the security is adequately managed by the need for a massive transmitter as well as the question “what is there to gain via a hostile takeover and re-programming the probe?”
I bet there’s actual security of some kind going on, but those two points seem like a massive hurdle to clear just to mess with a deep space probe.
what is there to gain via a hostile takeover and re-programming the probe
“We did it for the lulz”.
They get doom to run on it.
Imagine playing with a 22 hour delay on frames.
You may be interested in learning about its downlink: https://destevez.net/2021/09/decoding-voyager-1/
To me, the physics of the situation makes this all the more impressive.
Voyager has a 23 watt radio. That’s about 10x as much power as a cell phone’s radio, but it’s still small. Voyager is so far away it takes 22.5 hours for the signal to get to Earth traveling at light speed. This is a radio beam, not a laser, but it’s an extraordinarily tight beam for a radio, only 0.5 degrees wide; even so, it’s still 1000x wider than the Earth when it arrives. It’s being received by some of the biggest antennas ever made, but they’re still only 70m wide, so each one only receives a tiny fraction of the power transmitted. So, they’re decoding a signal that’s 10^-18 watts.
So, not only are you debugging a system created half a century ago without being able to see or touch it, you’re doing it with a 2-day delay to see what your changes do, and using the most absurdly powerful radios just to send signals.
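Those numbers roughly check out with nothing fancier than the inverse-square spreading of the beam. A quick sketch, using only the figures quoted above (these are the comment’s numbers, not official NASA link-budget values, and it assumes the power is spread evenly over the footprint):

```python
import math

tx_power_w = 23.0                 # Voyager's transmitter power, per the comment
beam_width_deg = 0.5              # quoted full beam width
distance_m = 22.5 * 3600 * 3.0e8  # 22.5 light-hours at c ~ 3e8 m/s
dish_diameter_m = 70.0            # DSN 70 m dish

# Radius of the beam's footprint by the time it reaches Earth
half_angle_rad = math.radians(beam_width_deg / 2)
spot_radius_m = distance_m * math.tan(half_angle_rad)

# A single dish intercepts (dish area / footprint area) of the beam,
# pretending the power is uniform across the footprint
spot_area_m2 = math.pi * spot_radius_m ** 2
dish_area_m2 = math.pi * (dish_diameter_m / 2) ** 2
received_w = tx_power_w * dish_area_m2 / spot_area_m2

print(f"footprint radius: {spot_radius_m:.2e} m")
print(f"received power:   {received_w:.1e} W")
```

This lands at a few times 10^-18 W, right in line with the figure above, and that’s before accounting for any real-world losses.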
The computer side of things is also even more impressive than this makes it sound. A memory chip failed. On Earth, you’d probably try to figure that out by physically looking at the hardware, and then probing it with a multimeter or an oscilloscope or something. They couldn’t do that. They had to debug it by watching the program as it ran and as it tried to use this faulty memory chip and failed in interesting ways. They could interact with it, but only on a 2 day delay. They also had to know that any wrong move and the little control they had over it could fail and it would be fully dead.
So, a malfunctioning computer that you can only interact with at 40 bits per second, that takes 2 full days between every send and receive, that has flaky hardware and was designed more than 50 years ago.
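To put the 40 bit/s figure in perspective, a toy calculation. The link rate and delay are the ones quoted above; the memory image size is a made-up round number purely for illustration, not Voyager’s actual memory size:

```python
link_bps = 40                 # downlink rate quoted above
one_way_hours = 22.5          # one-way light time quoted above
image_bytes = 64 * 1024       # hypothetical memory image, NOT the real size

# Time to trickle the whole image down, plus the unavoidable wait
downlink_hours = image_bytes * 8 / link_bps / 3600
round_trip_hours = 2 * one_way_hours

print(f"downlink time for the image: {downlink_hours:.1f} h")
print(f"command round trip:          {round_trip_hours:.0f} h")
```

Even a modest dump ties up the link for hours, and every “did that work?” costs almost two days.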
Is there a Voyager 1, uh…emulator or something? Like something NASA would use to test the new programming on before hitting send?
Today you would have a physical duplicate of the thing in orbit, so you can test code changes on the duplicate before you push them to the real one.
Finally I can put my take into this. I’ve worked in memory testing for years, and I’ll tell you that it’s actually pretty expected for a memory cell to fail after some time. So much so that what we typically do is build in redundancy into the memory cells. We add more memory cells than we might activate at any given time. When shit goes awry, we can reprogram the memory controller to remap the used memory cells so that the bad cells are mapped out and unused ones are mapped in. We don’t typically probe memory cells unless we’re doing some type of in-depth failure analysis. Usually we just run a series of algorithms that test each cell and identify which ones aren’t responding correctly, then map those out.
None of this is to diminish the engineering challenges that they faced, just to help give an appreciation for the technical mechanisms we’ve improved over the last few decades
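The remap-and-spare scheme described above can be sketched in a few lines. This is a toy model to show the idea; the names and structure are illustrative, not how any real memory controller (let alone Voyager’s) is built:

```python
# Toy model: a controller keeps a pool of spare rows and redirects
# accesses away from rows that a test has flagged as bad.

class RemappingController:
    def __init__(self, num_rows, num_spares):
        # Physical storage includes the spare rows beyond the logical range
        self.storage = [0] * (num_rows + num_spares)
        self.spares = list(range(num_rows, num_rows + num_spares))
        self.remap = {}  # logical row -> spare physical row

    def _physical(self, row):
        # Every access goes through the remap table
        return self.remap.get(row, row)

    def read(self, row):
        return self.storage[self._physical(row)]

    def write(self, row, value):
        self.storage[self._physical(row)] = value

    def mark_bad(self, row):
        """Map a failing logical row onto an unused spare row."""
        if not self.spares:
            raise RuntimeError("out of spare rows")
        self.remap[row] = self.spares.pop(0)

mem = RemappingController(num_rows=8, num_spares=2)
mem.mark_bad(3)      # pretend a test algorithm found row 3 failing
mem.write(3, 0xCD)   # transparently lands in a spare row
print(hex(mem.read(3)))  # 0xcd
```

The key point is that reads and writes go through the remap table, so software above the controller never notices that row 3 now lives in a spare.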
what we typically do is build in redundancy into the memory cells
Do you know how long that has been going on? Because Voyager is pretty old hardware.
pretty expected for a memory cell to fail after some time
50 years is plenty of time for the first memory chip to fail. Most systems would face total failure from multiple defects in half that time, WITH physical maintenance.
Also remember it was built with tools from the 70s. Which is probably an advantage, given everything else is still going
Also remember it was built with tools from the 70s. Which is probably an advantage
Definitely an advantage. Even without planned obsolescence, older electronics are pretty tolerant of outside interference compared to modern ones.
Huh. If it survives a few years more, it’s a lightday away.
They have a spare Voyager on Earth for debugging. EDIT: or not
And you explained all of that WITHOUT THE OBNOXIOUS GODDAMNS and FUCKIN SCIENCE AMIRITEs
Oh screw that, that’s an emotional post from somebody sharing their reaction, and I’m fucking STOKED to hear about it, can’t believe I missed the news!
I just have to imagine how interesting of a challenge that is. Kinda like when old games only had 300kb to store all their data on, so you had to program cool tricks to get it all to work.
No yeah, it’s like that plus the thing is a light day away, and on top of that malfunctioning on a hardware level. Incredible
It’s like you already have a 300kb game on a cartridge, but it doesn’t work for some unknown reason. Also you don’t actually have the cartridge, some randy in Greenland does. And they only answer emails once every 2 days or so.
Still faster than the average Windows update.
More stable, too.
Absolutely. The computers on Voyager hold the record for being the longest continuously running computer of all time.
Microsoft can’t even release a fix for Windows’ recovery partition being too small to stage updates. I had to do it myself, fucking amateurs.
Not to mention what a bitch that partition is when you need to shrink or increase the size of your Windows partition. If you need to upgrade your storage, or resize the partition to make room for other operating systems, you have to follow like 20 steps of voodoo magic commands to do it.
The possibility of a catastrophic fuck up is way too high to put this on the average Windows user.
Whoa, learned that one at the weekend. Added a new NVMe drive, cloned the old drive. I wanted to expand my Linux partition, but it was at the start of the drive, so I shifted all the Windows stuff to the end and grew the Linux partition.
Thought I’d boot into Windows to make sure it was okay, just in case (even though I’ve apparently not booted it in 3 years). BSOD. 2-3 hrs later it was working again. I’m still not sure what fixed it, if I’m honest; I seemed to just rerun the same bootrec commands and repair startup multiple times, but it works now, so yay!
Hiren’s BootCD has a handy tool that can fix that BSOD. I’ve used it many times.
Can’t or won’t? The same issue exists for both Windows 10 and 11, but they haven’t closed the ticket for Windows 11… Typical bullshit. It’s not exactly planned obsolescence, but when a bug like that comes up they’re just gonna grab the opportunity to go “sry impossible, plz buy new products”.
I didn’t know that. So the ticket is still open for 11 but there’s still no fix?
That is my understanding.
I can’t find the article that I read just yesterday, but this is somewhat the same story: https://www.theregister.com/2024/05/03/microsoft_windows_recovery_environment/
NASA should be in charge of Windows updates!
If they were it wouldn’t be Windows
Windows 13 update log:
Change kernel to Linux.
Build custom OS for astrophysics and space science applications.
happy rocket engineer noises
Now I’m curious. What would a NASA OS look like? Would it even be good for general use? Would they just focus on optimization? Could they finally beat Hannah Montana Linux, the superior OS?
I think it would have a real time kernel running parallel to a linux kernel.
Users could interact with the Linux kernel normally and schedule trusted real-time tasks on the other. Maybe there is reduced security for added performance on those cores. In general use it would be a normal, stable system with the allure of a performance mode that will break your system if you are not careful.
Certainly better tested.
Well, they only had to test it for a single hardware deployment. Windows has to be tested for millions if not billions of deployments. Say what you want, but Microsoft testers are godlike.
Windows? Hardware testing? Testing in general? LMAOOO
Why do Tumblr users approach every topic like a manic street preacher?
There’s a significant overlap between theatre kids and Tumblr users.
Thank you, now I can’t stop hearing them in Alan Tudyk’s Clayface voice from the Harley Quinn series…
That Venn diagram is maybe 3 degrees away from a circle.
A Venn diagram is not a pie chart, they’re all circles.
Like so much overlap of the two circles, it’s almost 1 circle.
Yeah, and we might use a ratio to describe that overlap, not degrees.
The area where it overlaps sometimes isn’t a circle.
OTS flashing.
Like OTA but with space rather than air.
OTV (void)
If we flash our phones here on Earth, we lose our warranty, wtf
Warranty never works anyway
Rejected : please comment your changes
Great documentary on the Voyager team: It’s Quieter in the Twilight
I prefer the sequel Star Trek: the motion picture.
V’ger 2: 2patch2furious
Keep in mind too these guys are writing and reading in like assembly or some precursor to it.
I can only imagine the number of checks and rechecks they probably go through before they press the “send” button. Especially now.
This is nothing like my loosey goosey programming where I just hit compile or download and just wait to see if my change works the way I expect…
they almost certainly have a hardware spare, or at the very least, an accurately simulated version of it, because again, this is 50 year old hardware. So it’s pretty easy to just simulate it.
But yeah they are almost certainly pulling some really fucked QA on this shit.
I read someplace a while back that the average beginner dev has an error for every 10 lines of code; a working dev, every 100; the (I think) US Air Force, every 1000. NASA (& company) was at a massive single error per 100000 lines of code. I wish I could find that article.
Interviewer: Tell me an interesting debugging story
Interviewee: …
Heh. Years ago during an interview I was explaining how important it is to verify a system before putting it into orbit. If one found problems in orbit, you usually can’t fix it. My interviewer said, “Why not just send up the space shuttle to fix it?”
Well…
I won’t even upgrade the BIOS on my motherboard because I’m afraid of bricking it.
I updated mine a couple of weeks ago. I was actually really anxious as it went through the process, but it worked fine, at first…
Then I found out Microsoft considered it a new computer and deactivated Windows. (And that’s when I found out they deleted upgrade licences from Windows 7 & 8 back in September.) That’s Microsoft in a nutshell for ya.
Posting from Linux then?
Well, a “free” OS anyway.
As a teenager I experienced a power outage while I was updating my bios.
Guess what happened?
I’m still bitter about it.
You can mitigate that risk by getting a UPS. You should get a UPS in any case imo, since even a shitty one lets you at least save your work and shut down properly if your electricity drops.
Oh yeah, I learned that lesson.
I got a big mean one these days.