Elon Musk’s FSD v12 demo includes a near miss at a red light and doxxing Mark Zuckerberg — 45-minute video was meant to demonstrate v12 of Tesla’s Full Self-Driving but ended up being a list of thi…

Elon Musk posted a 45-minute live demonstration of v12 of Tesla’s Full Self-Driving feature. During the video, Musk has to take control of the vehicle after it nearly runs a red light. He also doxxes Mark Zuckerberg.

  • Thorny_Thicket@sopuli.xyz · 1 year ago

    Well, it’s an ongoing discussion with no definite answer, but here’s how I see it:

    Suppose a car manufacturer comes up with a self-driving vehicle that is proven to be, say, three times safer than a skilled human driver. It is then objectively true to say that everyone would be safer in one of these cars. You could even argue it’s the responsible thing to do, especially compared to driving yourself, right?

    Well, maybe as a society we don’t prohibit people from driving, but you must then acknowledge that if you cause an accident, you will also suffer the consequences. However, even these self-driving vehicles aren’t foolproof. Despite being three times safer, they will still end up in accidents. Who do we blame for those, then? That’s what I take it you’re asking?

    No one, really, I guess. Assigning blame might not be the most productive thing to do; it could be more reasonable to think of these accidents as a collective risk that users willingly accept when using these products. You’re already accepting that risk now, so taking a risk three times smaller shouldn’t be an issue. Perhaps the vehicle manufacturer pays some compensation to the victim or their family too, not because it’s their fault per se, but because they can afford it and it seems like the fair thing to do.

    • batmaniam@lemmy.world · 1 year ago

      Fun conversation.

      I don’t think the statistics resolve the issue, though. At the end of the day, you can’t give something agency without accountability. I guess it’s similar to a well-behaved dog at a park that loses it and eats an old man or something. The statistics only matter so much: the owner introduced an unpredictable element with its own agency, and since you can’t hold a dog accountable, the owner inherits that responsibility.

      When I drive, I do accept a risk, but I do so knowing there’s a set of rules everyone is following to minimize that risk, and that there’s accountability should someone choose not to follow them. I guess what I’m saying is that an autonomous vehicle reducing my risk by 3x, 100x, or 1000x doesn’t change the accountability for a single instance in which it got it wrong — not when we’re talking about it knowingly and intentionally violating established traffic laws. That’s like saying a highly trained race car driver gets off the hook for hitting someone while driving way too fast in public because, statistically, they’re actually much less of a risk to the public than most drivers.

      This is all assuming, by the way, that we’re talking about a well-tested, well-understood system. I think having vehicles on the road right now that are advertised as “full self driving”, when there are known issues, makes a whole group of people directly responsible for any deaths that occur.