• 1 Post
  • 619 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • merc@sh.itjust.works to Science Memes@mander.xyz · Mythbusters · 6 days ago

    Just to clarify: you understand that because the engines push on the plane itself and not on the wheels, by the time the wheels start moving, the plane is already moving relative to ground and air alike.

    The wheels are attached to the plane, so they move at the same time as the plane. But I get what you’re trying to say: the wheels are effectively being dragged by the plane, they’re not powering the movement. What you need to consider, though, is that if you oppose that dragging by moving the conveyor belt in the opposite direction, you can prevent the plane from moving at all. Yes, the wheels are merely dragging and there isn’t a lot of friction there, but friction increases with speed. If you move the conveyor belt fast enough, you can stop the plane from moving relative to the ground, which stops it from moving relative to the air, which prevents it from taking off.

    An anchor sufficient to keep the plane from rolling forward is different because the force it applies is significantly greater.

    No, by definition it’s the same. The conveyor moves at whatever speed is necessary to stop the forward motion of the plane. It would eventually go so fast that it generated enough force to stop the plane from moving, so it’s indistinguishable from an anchor.

    Sure, you can deflate the tires and increase the rate of spin on the wheels.

    You don’t need to deflate the tires, you merely need to increase the speed at which the conveyor moves to match the speed of the wheels.

    if we assume the wheels are indestructible, which I’d argue is only fair, then even if what you’re saying is true and we ramp up the drag induced by the wheels sufficient to counter the engines… then the wind generated by the rolling treadmill would be producing a sufficient headwind for the plane to take off

    That seems like an unfair assumption because you’re assuming that the conveyor belt has second-order effects on the air (i.e. generating a “wind” over the wings of the plane), while ignoring the second-order effects the conveyor would have on the wheels (massive heat from friction leading to failure).

    On the other hand, this entire conversation assumes the thrust-to-weight ratio is less than 1. If it’s more than 1, well, they just… go straight up.

    I mean, the discussion is of a plane, not a helicopter or a rocket. Even jet fighters with a thrust-to-weight ratio of more than 1 typically have engines that only have that ratio once they’re at high speed, not from a standing start. That’s why even fighter jets on carriers need a catapult-assisted takeoff. A VTOL aircraft like a Harrier wouldn’t need that, but then its takeoff speed is zero, and the myth isn’t very interesting when the conveyor belt doesn’t move.
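    As a rough sanity check on the thrust-to-weight point, here’s a tiny calculation. The thrust and mass figures below are ballpark numbers supplied for illustration, not from the thread:

```python
g = 9.81  # m/s^2, standard gravity

def thrust_to_weight(thrust_newtons, mass_kg):
    """Ratio of engine thrust to the aircraft's weight."""
    return thrust_newtons / (mass_kg * g)

# Ballpark fighter jet: ~127 kN afterburning thrust, ~12 t loaded mass
fighter_twr = thrust_to_weight(127_000, 12_000)    # just over 1

# Ballpark ultralight: ~1 kN of propeller thrust, ~400 kg
ultralight_twr = thrust_to_weight(1_000, 400)      # well under 1
```

    A ratio over 1 means the engines alone can lift the aircraft; almost all conventional planes sit well below that at takeoff power.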


  • merc@sh.itjust.works to Science Memes@mander.xyz · Mythbusters · 6 days ago

    If the conveyor moves at the same speed as the wheels, it is exactly like attaching an anchor. That isn’t the myth they were testing, but it’s a more interesting myth.

    it can’t do that through the wheels – the wheels can only apply a force equal to their rolling resistance and the friction in their mechanics.

    It can do that if it can spin the wheels fast enough. Picture the ultra-light airplane from the episode, with big, bouncy wheels and a relatively weak propeller. If the treadmill were moving 1000 km/h backwards, that little propeller could never overcome the rolling-resistance force from the wheels.
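    To put hypothetical numbers on that: if wheel drag grows roughly linearly with belt speed (a simplifying assumption – real rolling resistance is messier), even a modest drag coefficient overwhelms a small propeller at 1000 km/h:

```python
# All numbers are hypothetical, chosen only to illustrate the scaling.
k = 2.0                    # N·s/m, assumed speed-proportional wheel drag
belt_speed = 1000 / 3.6    # 1000 km/h converted to m/s
drag = k * belt_speed      # ≈ 556 N of backward force on the plane
prop_thrust = 300.0        # N, assumed thrust of a small ultralight propeller

plane_can_hold_position = prop_thrust > drag  # False: the belt wins
```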


  • merc@sh.itjust.works to Science Memes@mander.xyz · Mythbusters · 6 days ago

    at what point does this become true?

    It’s always true.

    A stationary aeroplane on a treadmill will obviously move with the treadmill

    What do you mean? That the plane has its parking brakes on and moves with the treadmill surface? If the parking brakes aren’t engaged and you start up a treadmill under a plane, the plane’s wheels will spin and the plane will stay pretty much in one place. Because the wheels are free to spin, initially that’s all that will happen: the plane’s inertia keeps it in place while the wheels spin. Over time, the plane will start to drift in the direction the treadmill is moving, but it will never move as fast as the treadmill, because there’s also friction from the air, and that’s a much bigger factor.

    I assume an aeroplane moving at like 1 km/h still gets pulled backward by the treadmill.

    Moving at 1 km/h relative to what? The surface of the treadmill or the “world frame”? A plane on a moving treadmill will be pulled by the treadmill – there will be friction in the wheels, but it will also feel a force from the air. As soon as the pilot fires up the engine, the force from the engine will be much higher than any tiny amount of friction in the wheels from the treadmill.

    but how does it get lift if it is prevented from accelerating from 0 to 1 km/h of ground speed

    It isn’t prevented from accelerating from 0 to 1 km/h of ground speed. The wheels are spinning furiously, but they’re relatively frictionless. If the pilot didn’t start up the propeller, the plane would start to move in the direction the treadmill is pulling, but would never quite reach the speed of the treadmill due to air resistance. But, as soon as the pilot fires up the propeller, it works basically as normal. A little bit of the air will be moving backwards due to the treadmill, but most of the air will still be relatively stationary, so it’s easy to move the plane through the air quicker and quicker until it reaches take-off speed.
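    A crude way to see the “drifts, but never reaches belt speed” claim is a one-line force balance integrated over time. The coefficients below are invented for illustration; the point is only that when air drag dominates wheel-bearing drag, the plane settles far below the belt’s speed:

```python
m = 500.0        # kg, plane mass (hypothetical)
k_wheel = 2.0    # N·s/m, drag coupling the plane to the belt via the bearings
k_air = 40.0     # N·s/m, linearized air drag (assumed to dominate)
v_belt = -10.0   # m/s, belt running backwards
v, dt = 0.0, 0.01

for _ in range(200_000):  # 2000 simulated seconds, simple Euler steps
    force = k_wheel * (v_belt - v) - k_air * v
    v += force / m * dt

# Steady state: v = k_wheel * v_belt / (k_wheel + k_air) ≈ -0.48 m/s,
# a small fraction of the belt's 10 m/s.
```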


  • merc@sh.itjust.works to Science Memes@mander.xyz · Mythbusters · 6 days ago

    I think the confusion is that the conveyor belt is running at a fixed speed, namely the aircraft’s takeoff speed. That just dictates how fast the wheels spin; since the plane generates thrust with its propeller, the wheels simply end up spinning at double the takeoff speed. Since they’re relatively frictionless, that’s easy.
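    The “double takeoff speed” bit is just relative velocity: the wheel rim turns at the plane’s speed relative to the belt surface under it. A minimal sketch (the takeoff speed is a made-up figure):

```python
def wheel_rim_speed(plane_ground_speed, belt_speed):
    """Speed the wheel surface must turn at: the plane's speed
    relative to the belt surface it rolls on. Both speeds are
    measured in the ground frame, positive in the plane's direction."""
    return plane_ground_speed - belt_speed

takeoff = 250.0  # km/h, hypothetical takeoff speed

# Belt fixed at takeoff speed, running backwards:
rim = wheel_rim_speed(takeoff, -takeoff)  # 500 km/h: double takeoff speed
```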

    The more confusing myth is the one where the speed of the conveyor belt is variable, and it always matches the speed of the wheels. At the beginning the conveyor belt isn’t moving, but as soon as the plane starts to move and its wheels start to spin, the conveyor belt moves in the opposite direction. In that case, the plane can’t take off. It’s basically like attaching an anchor to the plane’s frame: no matter how fast the propeller spins, the airplane can’t move.




  • merc@sh.itjust.works to Science Memes@mander.xyz · Academia to Industry · 10 days ago

    PhD level intelligence? Sounds about right.

    Extremely narrow field of expertise ✔️
    Misplaced confidence in its abilities outside its area of expertise ✔️
    A mind filled with millions of things it has read, and near zero interactions with real people ✔️
    An obsession with how many words get published rather than with the quality and correctness of those words ✔️
    A lack of social skills ✔️
    A complete lack of familiarity with how things work in the real world ✔️





  • merc@sh.itjust.works to Science Memes@mander.xyz · Iron · 2 months ago

    Do you mean just the edge? Because with a sword basically the whole thing other than the handle is the blade.

    But yeah, with a tiny diamond edge you’d probably have the best of both worlds, a light, flexible sword with an ultra-sharp cutting edge.

    Still, the edge probably wouldn’t last for long. If the diamond was attached to a steel blade and the blade flexed, the diamond couldn’t flex and would probably snap.




  • merc@sh.itjust.works to Science Memes@mander.xyz · Voyager 1 · 2 months ago

    To me, the physics of the situation makes this all the more impressive.

    Voyager has a 23-watt radio. That’s about 10x the power of a cell phone’s radio, but it’s still small. Voyager is so far away that it takes 22.5 hours for the signal to reach Earth traveling at light speed. This is a radio beam, not a laser, but it’s an extraordinarily tight beam for a radio, only 0.5 degrees wide – and even so, the beam is still 1000x wider than the Earth when it arrives. It’s being received by some of the biggest antennas ever made, but they’re still only 70 m wide, so each one receives only a tiny fraction of the transmitted power. They’re decoding a signal of roughly 10^-18 watts.
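    You can roughly reconstruct that 10^-18 W figure from the numbers above: treat the 0.5° beam as spreading uniformly over a disc at Earth’s distance, and take the 70 m dish’s share of it. This is a crude model that ignores antenna gain patterns, but it lands in the right ballpark:

```python
import math

P_tx = 23.0                          # W, Voyager's transmitter power
c = 299_792_458                      # m/s, speed of light
d = 22.5 * 3600 * c                  # one-way distance: 22.5 light-hours, in m
half_angle = math.radians(0.5 / 2)   # 0.5-degree full beam width
beam_radius = d * math.tan(half_angle)
beam_area = math.pi * beam_radius ** 2
dish_area = math.pi * (70 / 2) ** 2  # 70 m receiving dish
P_rx = P_tx * dish_area / beam_area  # on the order of 1e-18 W
```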

    So, not only are you debugging a system created half a century ago without being able to see or touch it, you’re doing it with a 2-day round trip to see what your changes do, using some of the most absurdly powerful radios ever built just to send the signals.

    The computer side of things is even more impressive than this makes it sound. A memory chip failed. On Earth, you’d probably figure that out by physically examining the hardware, then probing it with a multimeter or an oscilloscope. They couldn’t do that. They had to debug it by watching the program as it ran, tried to use the faulty memory chip, and failed in interesting ways. They could interact with it, but only on a 2-day delay. And they knew that one wrong move could cost them what little control they had, leaving the probe fully dead.

    So: a malfunctioning computer that you can only interact with at 40 bits per second, with 2 full days between every send and receive, with flaky hardware, and designed more than 50 years ago.


  • I mean, allegedly ChatGPT passed the “bar exam” in 2023. Which I find ridiculous considering my experiences with ChatGPT and the accuracy and usefulness I get out of it, which isn’t that great at all.

    Exactly. If it passed the bar exam it’s because the correct solutions to the bar exam were in the training data.

    The other side can immediately tell that somebody has made an imitation without understanding the concept.

    No, they can’t. Just like people today think ChatGPT is intelligent despite it just being a fancy autocomplete. When it gets something obviously wrong they say those are “hallucinations”, but they don’t say they’re “hallucinations” when it happens to get things right, even though the process that produced those answers is identical. It’s just generating tokens that have a high likelihood of being the next word.
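    The “fancy autocomplete” mechanism, stripped to a toy: count which word follows which, then sample. A real LLM replaces the counting with a neural network over long contexts, but the output step – pick a likely next token – is the same shape:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows each word (a bigram table).
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word, rng):
    """Sample a next word with probability proportional to its count."""
    words, weights = zip(*follows[word].items())
    return rng.choices(words, weights=weights)[0]

# "sat" was only ever followed by "on" in the corpus, so:
print(next_word("sat", random.Random(0)))  # → "on"
```

    There’s no model of cats or mats anywhere in there, only counts – which is the point being made above, just at a vastly smaller scale.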

    People are also fooled by parrots all the time. That doesn’t mean a parrot understands what it’s saying, it just means that people are prone to believe something is intelligent even if there’s nothing there.

    ChatGPT refuses to tell illegal things, NSFW things, also medical advice and a bunch of other things

    Sure, in theory. In practice people keep finding ways around those blocks. The reason they’re so easy to bypass is that ChatGPT has no understanding of anything. That means it can’t be taught concepts, it has to be taught specific rules, and people can always find a loophole to exploit. Yes, after spending hundreds of millions of dollars on contractors in low-wage countries they think they’re getting better at blocking those off, but people keep finding new ways of exploiting vulnerabilities.



  • Yeah, that’s basically the idea I was expressing.

    Except, the original idea is about “understanding Chinese”, which is a bit vague. You could argue that right now the best translation programs “understand Chinese”, at least enough to translate between Chinese and English. That is, they understand the rules of Chinese when it comes to subjects, verbs, objects, adverbs, adjectives, etc.

    The question now is whether they understand the concepts they’re translating.

    Like, imagine the Chinese government wanted to modify the program so that it was forbidden to talk about subjects that the Chinese government considered off-limits. I don’t think any current LLM could do that, because doing that requires understanding concepts. Sure, you could ban key words, but as attempts at Chinese censorship have shown over the years, people work around word bans all the time.

    That doesn’t mean that some future system won’t be able to understand concepts. It may have an LLM grafted onto it as a way to communicate with people. But, the LLM isn’t the part of the system that thinks about concepts. It’s the part of the system that generates plausible language. The concept-thinking part would be the part that did some prompt-engineering for the LLM so that the text the LLM generated matched the ideas it was trying to express.


  • The “learning” in a LLM is statistical information on sequences of words. There’s no learning of concepts or generalization.

    And what do you think language and words are for? To transport information.

    Yes, and humans used words for that and wrote it all down. Then a LLM came along, was force-fed all those words, and was able to imitate that by using big enough data sets. It’s like a parrot imitating the sound of someone’s voice. It can do it convincingly, but it has no concept of the content it’s using.

    How do you learn as a human when not from words?

    The words are merely the context for the learning for a human. If someone says “Don’t touch the stove, it’s hot” the important context is the stove, the pain of touching it, etc. If you feed an LLM 1000 scenarios involving the phrase “Don’t touch the stove, it’s hot”, it may be able to create unique dialogues containing those words, but it doesn’t actually understand pain or heat.

    We record knowledge in books, can talk about abstract concepts

    Yes, and those books are only useful to someone with a lifetime of experience that lets them understand the concepts in the books. An LLM has no such context; it can merely generate plausible books.

    Think of it this way. Say there’s a culture where, instead of the written word, people record history by weaving fabrics. When there’s a death they weave a certain pattern, when there’s a war they use another pattern. A new birth is shown with yet another pattern, a good harvest with yet another, and so on.

    Thousands of rugs from that culture are shipped to some guy in Europe, and he spends years studying them. He sees that pattern X often follows pattern Y, and that pattern Z only ever seems to appear following patterns R, S and T. After a while, he weaves a fabric of his own, and it’s shipped back to the people who made the originals. They read a story of a great battle followed by many deaths, but surprisingly followed by new births and years of great harvests. They figure this stranger must understand how their system of recording events works. In reality, it was merely an imitation of the art he had seen, with no understanding of the meaning at all.

    That’s what’s happening with LLMs, but some people are dumb enough to believe there’s intention hidden in there.


  • That is to force it to form models about concepts.

    It can’t make models about concepts. It can only make models about what words tend to follow other words. It has no understanding of the underlying concepts.

    You can see that by asking them to apply their knowledge to something they haven’t seen before

    That can’t happen because they don’t have knowledge, they only have sequences of words.

    For example, a cat is more closely related to a dog than to a tractor.

    The only way ML models “understand” that is in terms of words or pixels. When they’re generating text related to cats, the words they’re generating are closer to the words related to dogs than the words related to tractors. When dealing with images, it’s the same basic idea. But, there’s no understanding there. They don’t get that cats and dogs are related.

    This is fundamentally different from how human minds work, where a baby learns that cats and dogs are similar before ever having a name for either of them.
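    That “closer in word/pixel space” idea can be sketched with cosine similarity over toy embedding vectors. The numbers here are invented purely for illustration; real models learn thousands of dimensions from co-occurrence statistics:

```python
import math

# Invented 3-d "embeddings", purely illustrative.
vec = {
    "cat":     [0.9, 0.8, 0.1],
    "dog":     [0.8, 0.9, 0.2],
    "tractor": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "cat" sits much closer to "dog" than to "tractor" in this space.
cat_dog = cosine(vec["cat"], vec["dog"])          # ≈ 0.99
cat_tractor = cosine(vec["cat"], vec["tractor"])  # ≈ 0.30
```

    The similarity is real, but it is a fact about where the vectors ended up, not evidence that the model knows what a cat or a dog is.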