Tesla In 'Self-Drive Mode' Hit By Train After Turning Onto Train Tracks
-
This post did not contain any content.
wrote on June 23, 2025, 23:38, last edited
Honey, are those train tracks? ... Yes, looks like we'll turn left onto the tracks for half a mile. It's a detour.
-
How so? The human in the car is always ultimately responsible when using Level 2 driver assists. Tesla does not offer Level 4/5 self-driving and therefore doesn't have to assume any liability.
wrote on June 23, 2025, 23:59, last edited
If you are monkeying with the car right before it crashes... wouldn't that raise suspicion?
-
It's been well documented. It lets them say in their statistics that the owner was in control of the car during the crash.
wrote on June 23, 2025, 23:59, last edited
That's my whole point.
-
Driver failed to control their car and avoid a collision.
FTFY.
I'm sure the car did actually take the action. But there are TONS of unavoidable warnings and reminders to the driver to supervise and take control when FSD goes wrong.
Which you can do by such super-technical means as "hitting the brake" or "steering the other way" or "flipping the right stalk up". Rocket science, I know.
Driver's fault. Bad technology, yes. Worse driver.
wrote on June 24, 2025, 00:05, last edited
I'd have bailed out and waited for the insurance check. Then got a different car.
-
Driver failed to control their car and avoid a collision.
FTFY.
I'm sure the car did actually take the action. But there are TONS of unavoidable warnings and reminders to the driver to supervise and take control when FSD goes wrong.
Which you can do by such super-technical means as "hitting the brake" or "steering the other way" or "flipping the right stalk up". Rocket science, I know.
Driver's fault. Bad technology, yes. Worse driver.
wrote on June 24, 2025, 00:06, last edited by landedgentry@lemmy.zip
::: spoiler spoiler
sdfsafsafsdaf
:::
-
It simply saw a superior technology and decided to attack.
wrote on June 24, 2025, 00:07, last edited by landedgentry@lemmy.zip
::: spoiler spoiler
sdfsafsafsdaf
:::
-
This post did not contain any content.
wrote on June 24, 2025, 00:07, last edited
@Davriellelouna I am sure it was all monitored in real time and a revised algorithm will be included in a future update.
-
I'd have bailed out and waited for the insurance check. Then got a different car.
wrote on June 24, 2025, 00:14, last edited
I wouldn't have had time between pumping iron and getting head, myself.
-
Yes.
You hit the brake.
wrote on June 24, 2025, 00:20, last edited
Ideally you hit the brakes before buying the Tesla.
-
This post did not contain any content.
wrote on June 24, 2025, 00:29, last edited
Also, the robotaxi has been live for all of a day and there's already footage of it driving on the wrong side of the road: https://www.youtube.com/watch?v=_s-h0YXtF0c&t=420s
-
Driver failed to control their car and avoid a collision.
FTFY.
I'm sure the car did actually take the action. But there are TONS of unavoidable warnings and reminders to the driver to supervise and take control when FSD goes wrong.
Which you can do by such super-technical means as "hitting the brake" or "steering the other way" or "flipping the right stalk up". Rocket science, I know.
Driver's fault. Bad technology, yes. Worse driver.
wrote on June 24, 2025, 00:53, last edited
I just don't know how they're getting away with calling it 'Full Self-Driving' if it's not fully self-driving.
-
This post did not contain any content.
wrote on June 24, 2025, 01:38, last edited
You all don't seem to understand, this is just the cost of progress!
-
This post did not contain any content.
wrote on June 24, 2025, 01:47, last edited
Hope no one was hurt,
regardless of whether they're stupid, distracted, or whatever!
If we can't build fail-safes into cars, what are our chances for real AI?
-
This post did not contain any content.
wrote on June 24, 2025, 01:52, last edited
I am all in on testing these devices and improving them way before they recommend fully giving in to AI.
-
This post did not contain any content.
wrote on June 24, 2025, 01:56, last edited
Where's the video?
-
Honey, are those train tracks? ... Yes, looks like we'll turn left onto the tracks for half a mile. It's a detour.
wrote on June 24, 2025, 02:03, last edited by nikkidimes@lemmy.world
Yeaaaaah, I mean fuck Tesla for a variety of reasons, but right here we're looking at a car that drove itself onto a set of train tracks, continued down the train tracks, and the people inside did... nothing? Like, they grabbed their shit and got out when it got stuck. The car certainly should not have done this, but this isn't really a Tesla problem. It'll definitely be interesting when robotaxis follow suit, though.
-
I just don't know how they're getting away with calling it 'Full Self-Driving' if it's not fully self-driving.
wrote on June 24, 2025, 02:15, last edited
I didn't keep track of how that lawsuit turned out.
That said, it is labeled "Full Self-Driving (Supervised)" on everything in the car.
Supervised as in you have to be ready to stop this kind of thing. That's what the supervision is.
-
Hope no one was hurt,
regardless of whether they're stupid, distracted, or whatever!
If we can't build fail-safes into cars, what are our chances for real AI?
wrote on June 24, 2025, 04:23, last edited
Okay, I don't want to directly disagree with you; I just want to add a thought experiment:
If it is a fundamental truth of the universe that a human literally cannot program a computer to be smarter than a human (because of some Neil deGrasse Tyson-esque interpretation of entropy), then no matter what, AIs will crash cars as often as real people do.
And the question of who is responsible for the AI's actions will always come back to a person, because people can take responsibility and AIs are just machine tools. This basically means there is a ceiling to how autonomous self-driving cars will ever be (because someone will have to sit at the controls and be ready to take over), and I think that is a good thing.
Honestly, I'm in the camp that computers can never truly be "smarter" than a person in all respects. Maybe you can max out an AI's self-driving stats, but then you'll have no points left over for morality; or you can balance the two out, and it might just get into less morally challenging accidents more often ¯\_(ツ)_/¯. There are lots of ways to look at this.
-
At my local commuter rail station the entrance to the parking lot is immediately next to the track. It's easily within the margin of error for GPS, and if you're only focusing immediately in front of you, the pavement at the entrance probably looks similar.
There are plenty of cues, so drivers shouldn't be fooled, but perhaps FSD wouldn't pay attention to them, since it's a bit of an outlier.
That being said, I almost got my Subaru stuck once because an ATV trail looked like the dirt road to a campsite on the GPS, and I missed any cues there may have been.
wrote on June 24, 2025, 04:55, last edited
Sounds reasonable to mix up dirt roads at a campsite. Idk why the other commenter had to be so uptight. I get the mix-up in the lot if it's all paved and smooth, especially if, say, you make a left into the lot and the rail has a pedestrian crossing first. Shouldn't happen, but there's significant overlap in the appearance of the ground. The average driver is amazingly inept, inattentive, and remorseless.
I'd be amused if your lot is the one I know of where the train pulls out of the station, makes a stop for the crosswalk, then proceeds to just one other station.
But the part of rail that's not paved between? That should always be identifiable as a train track. I can't understand when people just send it down the tracks. And yet, it still happens. Even at the station mentioned above, where they pulled onto the 100 mph section. Unreal.
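To put rough numbers on the GPS point above: consumer receivers are commonly off by around 5-10 m even in the open, which can exceed the gap between a lot entrance and the rail crossing right beside it. Here is a minimal sketch in Python, with made-up coordinates (everything below is hypothetical, not measured at any actual station):

```python
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical coordinates for a parking-lot entrance and the adjacent
# rail crossing, a few meters apart -- invented purely for illustration.
lot_entrance = (42.36010, -71.05890)
rail_crossing = (42.36013, -71.05894)

gap_m = haversine_m(*lot_entrance, *rail_crossing)
gps_error_m = 5.0  # assumed open-sky accuracy for a consumer GPS receiver

print(f"entrance-to-crossing gap: {gap_m:.1f} m")
print(f"assumed GPS error:        {gps_error_m:.1f} m")
if gap_m <= gps_error_m:
    print("GPS alone can't reliably tell the two apart; visual cues have to")
```

With these sample points the gap works out to under 5 m, inside the assumed error circle, which is the commenter's point: the map position can't distinguish the entrance from the crossing, so the visual cues are all that's left.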
-
This post did not contain any content.
wrote on June 24, 2025, 05:07, last edited
Elongated Musketon: UM THAT WAS JUST 1 FAULTY MODEL STOP CHERRY PICKING GUYS JUST BUY IT!!!1
-