Tesla In 'Self-Drive Mode' Hit By Train After Turning Onto Train Tracks
-
This post did not contain any content.
wrote on 24 June 2025, 01:52
I am all in on testing these devices and improving them way before they recommend fully giving in to AI.
-
This post did not contain any content.
wrote on 24 June 2025, 01:56
Where's the video?
-
Honey, are those train tracks? ... Yes, looks like we'll turn left onto the tracks for half a mile. It's a detour.
wrote on 24 June 2025, 02:03, last edited by nikkidimes@lemmy.world
Yeaaaaah, I mean fuck Tesla for a variety of reasons, but right here we're looking at a car that drove itself onto a set of train tracks, continued down the tracks, and the people inside did... nothing? Like, they grabbed their shit and got out when it got stuck. The car certainly should not have done this, but this isn't really a Tesla problem. It'll definitely be interesting when robotaxis follow suit, though.
-
I just don't know how they're getting away with calling it 'full self driving' if it's not fully self driving.
wrote on 24 June 2025, 02:15
I didn't keep track of how that lawsuit turned out.
That said, it is labeled "Full Self-Driving (Supervised)" on everything in the car.
Supervised as in you have to be ready to stop this kind of thing. That's what the supervision is.
-
Hope no one was hurt,
regardless of whether they were stupid, distracted, or whatever!
If we can't build fail-safes into cars, what are our chances for real AI?
wrote on 24 June 2025, 04:23
Okay, I don't want to directly disagree with you, I just want to add a thought experiment:
If it is a fundamental truth of the universe that a human literally cannot program a computer to be smarter than a human (because of some Neil deGrasse Tyson-esque interpretation of entropy), then no matter what, AIs will crash cars as often as real people do.
And the question of who is responsible for an AI's actions will always come back to a person, because people can take responsibility and AIs are just machine tools. This basically means there is a ceiling on how autonomous self-driving cars can ever be (because someone will always have to sit at the controls, ready to take over), and I think that is a good thing.
Honestly, I'm in the camp that computers can never truly be "smarter" than a person in all respects. Maybe you can max out an AI's self-driving stats, but then you'll have no points left over for morality; or you can balance the two, and it might just get into less morally challenging accidents more often ¯\_(ツ)_/¯. There are lots of ways to look at this.
-
At my local commuter rail station the entrance to the parking lot is immediately next to the track. It's easily within the margin of error for GPS, and if you're only focusing immediately in front of you, the pavement at the entrance probably looks similar.
There are plenty of cues, so drivers shouldn't be fooled, but perhaps FSD wouldn't pay attention to them since it's a bit of an outlier.
That being said, I almost got my Subaru stuck once because an ATV trail looked like the dirt road to a campsite on the GPS, and I missed any cues there may have been.
wrote on 24 June 2025, 04:55
Sounds reasonable to mix up dirt roads at a campsite. Idk why the other commenter had to be so uptight. I get the mix-up in the lot if it's all paved and smooth, especially if, say, you make a left into the lot and the rail has a pedestrian crossing first. It shouldn't happen, but there's significant overlap in the appearance of the ground. The average driver is amazingly inept, inattentive, and remorseless.
I'd be amused if your lot is the one I know of where the train pulls out of the station, makes a stop for the crosswalk, then proceeds to just one other station.
But the stretch of rail that isn't paved over in between? That should always be identifiable as a train track. I can't understand when people just send it down the tracks. And yet it still happens, even at the station mentioned above, where they pulled onto the 100 mph section. Unreal.
-
This post did not contain any content.
wrote on 24 June 2025, 05:07
Elongated Musketon: UM THAT WAS JUST 1 FAULTY MODEL STOP CHERRY PICKING GUYS JUST BUY IT!!!1
-
This post did not contain any content.
wrote on 24 June 2025, 05:32
Teslas have a problem with the lefts.
-
Elongated Musketon: UM THAT WAS JUST 1 FAULTY MODEL STOP CHERRY PICKING GUYS JUST BUY IT!!!1
wrote on 24 June 2025, 05:58
Clearly the train didn't yield properly, time to ban trains.
-
@Davriellelouna I am sure it was all monitored in real time and a revised algorithm will be included in a future update.
wrote on 24 June 2025, 06:05
And maybe an update to the firmware on those expensive Logitech webcams powering the AI.
-
I'm not sure I'd be able to sleep through driving on the railroad tracks. I'm going to guess this person was simply incredibly fucking stupid, and thought the car would figure it out, instead of doing the bare fucking minimum of driving their goddamn 2-ton death machine themself.
wrote on 24 June 2025, 06:06
I’m going to guess this person was simply incredibly fucking stupid
Well, the guy owned a Tesla, it was pretty obvious.
-
Okay, I don't want to directly disagree with you, I just want to add a thought experiment:
If it is a fundamental truth of the universe that a human literally cannot program a computer to be smarter than a human (because of some Neil deGrasse Tyson-esque interpretation of entropy), then no matter what, AIs will crash cars as often as real people do.
And the question of who is responsible for an AI's actions will always come back to a person, because people can take responsibility and AIs are just machine tools. This basically means there is a ceiling on how autonomous self-driving cars can ever be (because someone will always have to sit at the controls, ready to take over), and I think that is a good thing.
Honestly, I'm in the camp that computers can never truly be "smarter" than a person in all respects. Maybe you can max out an AI's self-driving stats, but then you'll have no points left over for morality; or you can balance the two, and it might just get into less morally challenging accidents more often ¯\_(ツ)_/¯. There are lots of ways to look at this.
wrote on 24 June 2025, 07:31
a human literally cannot program a computer to be smarter than a human
I'd add that a computer vision system can't integrate new information as quickly as a human, especially when it's limited to vision-only sensing, which Tesla is strangely obsessed with even though the cost of other sensors keeps dropping and their utility has been proven by Waymo's excellent record.
All in all, I see no reason to attempt to replace humans when we have billions of them. This is doubly so for 'artistic' AI purposes: we have billions of people, let the artists create the art.
Show me an AI-driven system that can clean my kitchen or do my laundry. That'd be WORTH it.
-
If you’re about to be hit by a train, driving forward through the barrier is always the correct choice. It will move out of the way and you stay alive to fix the scratches in your paint.
wrote on 24 June 2025, 07:51, last edited by ledericas@lemm.ee
Not if you're in a tesslar.
-
Elongated Musketon: UM THAT WAS JUST 1 FAULTY MODEL STOP CHERRY PICKING GUYS JUST BUY IT!!!1
wrote on 24 June 2025, 07:56
For as much as I'd like to see Tesla stock crash these days, and without passing judgment on the whole autonomous-car topic, this IS cherry-picking.
Human drivers aren't exactly flawless either, but we don't ban human-driven cars because some people act recklessly or others have had a seizure while driving.
If self-driving cars are statistically safer, I'd rather have them and reduce the risk of coming across another reckless driver.
-
This post did not contain any content.
wrote on 24 June 2025, 11:15
Tesla's new automatic suicide feature
-
Tesla's new automatic suicide feature
wrote on 24 June 2025, 14:06
I think that's called "murder"
-
Clearly the train didn't yield properly, time to ban trains.
wrote on 24 June 2025, 14:06
I mean, he did specifically come up with his idiotic "Hyperloop" concept to kill California's high-speed rail project.
-
This post did not contain any content.
wrote on 24 June 2025, 14:15
How could the left do this /s
-
How could the left do this /s
wrote on 24 June 2025, 14:23
Next up: "Train is a communistic tool to restrict vehicular freedom. Banish trains! More highways!"
-
Also, the robotaxi has been live for all of a day and there's already footage of it driving on the wrong side of the road: https://www.youtube.com/watch?v=_s-h0YXtF0c&t=420s
wrote on 24 June 2025, 14:31
The thing that strikes me about both this story and the one you posted is that the people in the Tesla seem to be like "this is fine" as the car does some pretty terrible stuff.
In that one, the Tesla fails to honor a forced left turn, instead opting to go straight into the oncoming lanes and waggle about, causing other cars to honk at it, and the human just sits there without trying to intervene. Meanwhile, they describe it as a "navigation issue/hesitation," which really understates what happened.
The train one didn't come with video, but I can't imagine just letting my car turn itself onto the tracks and carry on for 40 feet without reacting.
If my Ford so much as thinks about drifting too close to another lane, I'm intervening, even if it really would have been no big deal. I can't imagine this level of "oh well."
Tesla drivers/riders are really nuts...
-