New research shows driverless-car software is significantly more accurate at detecting adults and light-skinned people than children and dark-skinned people.

  • Endomlik@reddthat.com · 1 year ago

    It seems this will always be the case: small objects are harder to detect than large ones, and low-contrast objects are harder to detect than high-contrast ones. Even if detection gets 1000x better, these gaps will still exist. Do you introduce artificial error to make things fair?

    Repeating the same comment from a crosspost.
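The closing question above ("do you introduce artificial error?") has a concrete analogue in fairness research: post-processing that degrades the better-served group's accuracy until group metrics match. A minimal sketch with made-up numbers; the group splits, accuracies, and the `degrade_to` helper are all hypothetical, not anything from the article:

```python
import random

random.seed(0)

def degrade_to(preds, labels, target_acc):
    """Randomly flip correct predictions so expected accuracy drops to target_acc."""
    preds = list(preds)
    acc = sum(p == y for p, y in zip(preds, labels)) / len(preds)
    # probability of flipping a correct prediction so expected accuracy == target
    flip_p = max(0.0, (acc - target_acc) / acc) if acc > 0 else 0.0
    for i, (p, y) in enumerate(zip(preds, labels)):
        if p == y and random.random() < flip_p:
            preds[i] = 1 - p
    return preds

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# hypothetical per-group pedestrian-detection results (1 = detected)
labels_a = [1] * 100
labels_b = [1] * 100
preds_a = [1] * 95 + [0] * 5    # group A: 95% detection rate
preds_b = [1] * 80 + [0] * 20   # group B: 80% detection rate

# "fairness" by injecting error into the better-served group
preds_a_fair = degrade_to(preds_a, labels_a, target_acc=0.80)
print(accuracy(preds_a_fair, labels_a), accuracy(preds_b, labels_b))
```

The sketch makes the absurdity of that route visible: equality is achieved only by making the system worse for group A, which is why the serious answer is improving detection for the under-served group, not injecting error.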

    • kibiz0r@midwest.social · 1 year ago

      All the more reason to take this seriously and not disregard it as an implementation detail.

      When we, as a society, ask: Are autonomous vehicles safe enough yet?

      That’s not the whole question.

      …safe enough for whom?

      • Mac@mander.xyz · 1 year ago

        Also, what is the safety target? Human drivers are extremely unsafe. Are we looking for any improvement, or are we looking for perfection?

        • kibiz0r@midwest.social · 1 year ago (edited)

          This is why it’s as much a question of philosophy as it is of engineering.

          Because there are things we care about besides quantitative measures.

          If you replace 100 pedestrian deaths due to drunk drivers with 99 pedestrian deaths due to unexplainable self-driving malfunctions… Is that, unambiguously, an improvement?

          I don’t know. In the aggregate, I guess I would have to say yes…?

          But when I imagine being that person in that moment, trying to make sense of the sudden loss of a loved one and having no explanation other than watershed segmentation and k-means clustering… I start to feel some existential vertigo.

          I worry that we’re sleepwalking into treating rationalist utilitarianism as the empirically correct moral model, because that’s the future Silicon Valley is building, almost as if it’s inevitable.

          And it makes me wonder, like… How many of us are actually thinking it through and deliberately agreeing with them? Or are we all just boiled frogs here?