New Footage Shows Tesla On Autopilot Crashing Into Police Car After Alerting Driver 150 Times

Six officers who were injured in the crash are suing Tesla despite the fact that the driver was allegedly impaired

  • r00ty@kbin.life · 1 year ago

    I’m not so sure that disengaging Autopilot because the driver’s hands aren’t on the wheel is the best option while on a highway. Engage the hazard lights, remain in lane (or, if possible, move to the slowest lane) and come to a controlled stop. Surely that’s the better way?
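    A minimal sketch of the escalation policy described above (purely hypothetical logic and thresholds — not Tesla’s actual behaviour, just the commenter’s proposed fallback):

    ```python
    def fallback_action(hands_on_wheel: bool, alert_count: int,
                        speed_kmh: float) -> list[str]:
        """Hypothetical graceful-degradation policy for a driver-assist system.

        Instead of simply disengaging when the driver is unresponsive,
        escalate: warn first, then engage hazards and slow to a stop in lane.
        """
        if hands_on_wheel:
            return ["continue"]
        if alert_count < 3:
            # Still within the warning phase: keep alerting the driver.
            return ["audible_warning"]
        # Driver unresponsive after repeated alerts: do NOT just hand
        # back control; bring the vehicle to a controlled stop instead.
        actions = ["engage_hazard_lights", "remain_in_lane"]
        if speed_kmh > 0:
            actions.append("decelerate_to_stop")
        return actions
    ```

    The point of the sketch is that “disengage” never appears as an action: the system always retains control until the vehicle is stationary.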

    Just disengaging Autopilot seems like such a cop-out to me. And the fact that it disengaged right at the end, so that “the driver was in control at the moment of the crash”, again feels like bad “self” driving — especially when the so-called self-driving software is able to come to a stop as part of its behaviour in other situations.

    Also, if the system cannot recognize an emergency vehicle (I wonder if this was a combination of the haze and the unusually bright emergency lights saturating the image it was trying to analyse), it’s again a sign you shouldn’t be releasing this to the public. It’s clearly just not ready.

    Not taking any responsibility away from the human driver here. I just don’t think the behaviour was good enough for software controlling a car used by the public.

    Not to mention, of course, the reason for suing Tesla isn’t because they think they’re more liable. It’s because they can actually get some money from them.

      • NeoNachtwaechter@lemmy.world · 1 year ago

        That’s not the main problem — it’s more of an excuse. The main problem was explained in the video right before that:

        Their radar is bad at recognizing immobile cars on the road. This means all objects. All obstacles on your road!

        The emergency vehicles just happen to be your most frequent kind of obstacles.

        The fallback to the camera is a bad excuse anyway, because radar is needed to detect obstacles first: the camera will usually pick them up later (i.e. at a closer distance) than the radar.

        The even better solution (trigger warning: nerdy stuff incoming) is to always mix the results of all kinds of sensors at an early stage in the processing software. That’s what European car makers have done right from the beginning, but Tesla is way behind with their engineering. Their sensors still work independently, and each does its own processing. So every shortcoming of one sensor creates a faulty detection result that has to be corrected later (read: seconds later, not milliseconds) by other kinds of sensors.
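        The early-vs-late fusion distinction can be sketched in a toy example (hypothetical confidence values and a simple noisy-OR combination — real fusion pipelines use Kalman filters or learned models):

        ```python
        def late_fusion(radar_detected: bool, camera_detected: bool) -> bool:
            # Late fusion: each sensor makes its own hard decision first.
            # A miss by one sensor can only be papered over afterwards.
            return radar_detected or camera_detected

        def early_fusion(radar_conf: float, camera_conf: float,
                         threshold: float = 0.5) -> bool:
            # Early fusion: raw confidences are combined BEFORE any decision,
            # so a weak-but-consistent signal in both sensors can trigger a
            # detection that neither sensor would report on its own.
            combined = 1 - (1 - radar_conf) * (1 - camera_conf)  # noisy-OR
            return combined >= threshold
        ```

        With two 40 %-confidence readings, each sensor alone stays below a 50 % decision threshold, but the early-fused confidence (64 %) crosses it — which is the advantage the comment is describing.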

        • Blaidd@lemmy.world · 1 year ago

          Their radar is bad at recognizing immobile cars on the road. This means all objects. All obstacles on your road!

          Teslas don’t use radar, just cameras. That’s why Teslas crash at way higher rates than actual self-driving cars like Waymo.

        • r00ty@kbin.life · 1 year ago

          Their radar is bad at recognizing immobile cars on the road. This means all objects. All obstacles on your road!

          I feel like this is poor tech understanding in journalism (which is hardly new). There’s no reason radar couldn’t see stationary vehicles. In fact, quite specifically, they’re NOT stationary relative to the radar transceiver — the car carrying the radar is moving towards them. Radar would see them no problem.

          My actual suspicion here is that Tesla actively ignores stationary vehicles (it can tell they’re stationary by adding its own known speed to the measured relative speed) that are not directly in front of the vehicle. On normal streets this makes sense, at least for those on the non-driver’s side. Do you pay attention to every car parked by the side of the road when driving? You’re maybe looking for signs of movement, or lights on, etc., but you’re not tracking them all — and neither will the autopilot.

          On a highway, though, more than the occasional vehicle on the shoulder should make you wonder what else is ahead (and I’d argue even a single car on the shoulder is a risk worth watching). A long line of them should definitely make you slow down.
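          The arithmetic in that parenthetical is simple enough to show directly (a toy illustration with a made-up tolerance; real radar tracking is far more involved). The convention here is that a negative relative speed means the target is closing in:

          ```python
          def ground_speed_kmh(ego_speed_kmh: float, relative_speed_kmh: float) -> float:
              """Target's speed over the ground: ego speed plus the radar's
              measured relative speed (negative = target closing on us)."""
              return ego_speed_kmh + relative_speed_kmh

          def is_stationary(ego_speed_kmh: float, relative_speed_kmh: float,
                            tolerance_kmh: float = 2.0) -> bool:
              # A parked car approached at 100 km/h shows a relative speed of
              # -100 km/h, so its ground speed works out to ~0.
              return abs(ground_speed_kmh(ego_speed_kmh, relative_speed_kmh)) <= tolerance_kmh
          ```

          So a radar return with a closing speed exactly matching the ego vehicle’s own speed is, by this arithmetic, a stationary object — exactly the class of return the comment suspects is being filtered out.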

          I think human drivers would do this, and an autopilot should be considering what kind of road it is on and whether it should treat these scenarios differently.

          I also have another suspicion, though it’s just a thought. If this Tesla really was using radar as well as cameras, haze or not, it should have seen that stationary vehicle from further away than it did. Since newer Teslas don’t have radar, and coming from a software development background, I can see a logical (in corporate-thinking terms) reason to remove the radar code: they simply won’t want to maintain it if they have no plans to return to radar. Think of it like this: after a few versions of augmenting the camera detection logic, it’s unlikely to still work with the existing radar logic. Do they spend the time making the two work together for the older vehicles, or only ship the camera-based AI in newer software versions? I’d suspect the latter would be the business decision.

      • r00ty@kbin.life · 1 year ago

        The question here is: could you see there was a reason to stop significantly (more than 3 seconds) before the autopilot did? If we can recognize it through the haze, the autopilot must be able to as well.

        Moreover, it now needs to be extra good at spotting vehicles in bad lighting conditions, because the other sensors have been removed on newer Teslas. It only has cameras to go on.
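        That 3-second margin matters because, at highway speed, the obstacle must be detected well beyond the physical stopping distance. A rough back-of-the-envelope check (idealized constant deceleration; the reaction time and braking figures are illustrative assumptions, not measured values):

        ```python
        def stopping_distance_m(speed_kmh: float, reaction_s: float = 1.0,
                                decel_ms2: float = 7.0) -> float:
            """Distance covered during the reaction time plus braking
            distance under constant deceleration (v**2 / (2*a))."""
            v = speed_kmh / 3.6  # convert km/h to m/s
            return v * reaction_s + v * v / (2 * decel_ms2)
        ```

        At 110 km/h this comes out to roughly 97 m — so a camera that only resolves an obstacle through haze at, say, 50 m has already lost the ability to stop in time, whichever sensor suite is fitted.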