[–] kbin_space_program@kbin.run 4 points 2 months ago* (last edited 2 months ago) (2 children)

Well yes and no.

First off, ignoring the pitfalls of AI:
There is the issue at the core of the trolley problem: do you preserve the life of a loved one, or the lives of several strangers?

This translates to: if you know your options while driving are:

  1. Drive over a cliff, into a semi, or into some other guaranteed-lethal obstacle for you and everyone in the car.
  2. Hit a stranger, but you won't die.

What do you choose as a person?

Then, we have the issue of how to program a self-driving car to handle that same problem. Does it value all life equally, or is it weighted to save the life of the immediate customer over everyone else?
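As a rough sketch of what that design choice looks like in code (the function, weights, and numbers below are entirely made up for illustration, not from any real system):

```python
# Hypothetical sketch: a planner scoring candidate maneuvers by expected
# fatality risk. Whether occupant_weight equals bystander_weight is exactly
# the "value all life equally vs. protect the customer" question.
# All names and values here are assumptions for illustration.

def maneuver_cost(occupant_risk: float,
                  bystander_risk: float,
                  occupant_weight: float = 1.0,
                  bystander_weight: float = 1.0) -> float:
    """Lower cost = preferred maneuver."""
    return occupant_weight * occupant_risk + bystander_weight * bystander_risk

# With equal weights, the cliff (occupant_risk ~1.0) scores worse than
# hitting one stranger (bystander_risk ~0.9). Raise occupant_weight and
# the car is explicitly biased toward whoever is inside it.
cliff = maneuver_cost(occupant_risk=1.0, bystander_risk=0.0)
swerve = maneuver_cost(occupant_risk=0.05, bystander_risk=0.9)
print(cliff, swerve)  # 1.0 0.95 -> the "hit the stranger" option wins
```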

Lastly, and really the likely core problem: modern AI isn't capable of full self-driving, and the current core architecture will always have a knowledge gap, regardless of the size of the model. 99% of the time, these systems can only handle things that are represented in their training data. So if they don't recognize a human or an obstacle, in all of the myriad forms we can take and ways we can move, they will ignore it. The remaining 1% is hallucinations that happen to be randomly beneficial. But, particularly for driving, if it's not in the model, they can't do it.
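A minimal sketch of that failure mode, assuming a perception stack that only keeps detections matching known classes above a confidence threshold (the class list and threshold are made-up examples):

```python
# Illustrative only: a detector that can't classify something effectively
# pretends it isn't there. KNOWN_CLASSES and the threshold are assumptions.

KNOWN_CLASSES = {"car", "truck", "pedestrian", "cyclist", "traffic_cone"}
CONFIDENCE_THRESHOLD = 0.5

def filter_detections(raw_detections):
    """raw_detections: list of (class_label, confidence) tuples."""
    obstacles = []
    for label, confidence in raw_detections:
        if label in KNOWN_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
            obstacles.append((label, confidence))
        # Anything else (a person in an unusual pose, someone pushing a
        # loaded bike, debris the model never saw) simply is not an
        # obstacle as far as the planner downstream is concerned.
    return obstacles

print(filter_detections([("pedestrian", 0.92), ("unknown", 0.40)]))
# [('pedestrian', 0.92)]  -> the unrecognized object silently disappears
```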

[–] Transporter_Room_3@startrek.website 8 points 2 months ago (1 children)

We are not talking about a "what if" situation where it has to make a moral choice. We aren't talking about a car that decided to hit a person instead of a crowd. Unless this vehicle had no brakes, it doesn't matter.

It's a simple "if person, then stop" not "if person, stop unless the light is green"

A normal, rational human doesn't need a complex algorithm to decide to stop if little Stacy runs into the road after a ball at a zebra/crosswalk/intersection.

The ONLY consideration is "did they have enough time/space to avoid hitting the person?"
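That consideration is basically one line of kinematics. As an illustrative sketch (the reaction time and deceleration below are assumed values, roughly a hard stop on dry pavement):

```python
# Sketch of "did they have enough time/space to stop?" using standard
# stopping-distance kinematics. Reaction time and deceleration are
# illustrative assumptions (~0.7 g is a typical dry-pavement hard stop).

def had_room_to_stop(distance_to_person_m: float,
                     speed_mps: float,
                     reaction_time_s: float = 1.0,
                     max_decel_mps2: float = 7.0) -> bool:
    stopping_distance = (speed_mps * reaction_time_s              # reaction distance
                         + speed_mps ** 2 / (2 * max_decel_mps2))  # braking distance
    return distance_to_person_m >= stopping_distance

# 50 km/h (~13.9 m/s): roughly 14 m of reaction plus 14 m of braking.
print(had_room_to_stop(distance_to_person_m=30, speed_mps=13.9))  # True
print(had_room_to_stop(distance_to_person_m=20, speed_mps=13.9))  # False
```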

[–] kbin_space_program@kbin.run 5 points 2 months ago* (last edited 2 months ago)

The problem is:
Define person.

A normal, rational person does have a complex algorithm for stopping in that situation. The trick is that the calculation is subconscious, so we don't think of it as complex.

Hell, even just recognizing a human is so complex that we ourselves have problems with it. It's why we see faces in inanimate objects, and why the uncanny valley is a thing.

I agree that stopping for people is of the utmost importance. Cars exist for transportation, and roads exist to move people, not cars. The problem is that, from a software point of view, ensuring you can define a person 100% of the time is still a post-doctoral, research-level problem. Self-driving cars are not ready for open use yet, and anyone saying they are is either delusional or lying.

[–] veganpizza69@lemmy.world 2 points 2 months ago

!fuckcars@lemmy.world