
Tesla loses Autopilot wrongful death case in $329 million verdict

Technology
  • Holding them accountable would mean jail time. I'm fine with even putting the salesman in jail for this. Who's gonna sell your vehicles when they know there's a decent chance of taking the blame for your shitty tech?

    You'd have to prove that the salesman said exactly that, and without a record it's at best a he said / she said situation.

    I'd be happy to see Musk jailed though; he's definitely touted self-driving as fully functional.

  • A representative for Tesla sent Ars the following statement: "Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator—which overrode Autopilot—as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver—from day one—admitted and accepted responsibility."

    So, you admit that the company’s marketing has continued to lie for the past six years?

    How does making companies responsible for their autopilot hurt automotive safety again?

  • "Today's verdict is wrong and only works to set back automotive safety..."

    Seems like jury verdicts don't set legal precedent in the US, but they're still often considered to have persuasive impact on future cases.

    This kinda makes sense, but the articles on this don't make it very clear how impactful it actually is - crossing my fingers for Tesla's downfall here. I'd imagine launching robotaxis would be even harder now.

    It's funny how this legal bottleneck was the first thing the AI driving industry ran into. Then we kinda collectively forgot about it, and now it seems it actually was as important as we thought it would be. Let's say robotaxis scale up - there would be thousands of these cases every year just due to the sheer scale of driving. How could that ever work outside of places like China?

  • Seems like jury verdicts don't set legal precedent in the US, but they're still often considered to have persuasive impact on future cases.

    What jury verdicts do is cost real money - companies often (not always) change their behavior in hopes of avoiding more.

  • What jury verdicts do is cost real money - companies often (not always) change their behavior in hopes of avoiding more.

    Yeah, but how would this work at full driving scale? If there are 1,000 cases a year and 100 of them settle for $0.3 billion each, that's already $30 billion a year - roughly a third of Tesla's yearly revenue (rough math sketched below). Then, on top of that, consider the overhead of insurance fraud and the like. It seems completely legally unsustainable unless we settle on "a human life costs X dollars, next".

    I genuinely think we'll be stuck with human drivers for a long time outside of highly controlled city rides like Waymo, where the cars are limited to around 40 km/h, which makes it very difficult to kill anyone either way.
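
A quick back-of-envelope check of that estimate (the case count and settlement size are the commenter's hypotheticals; the revenue figure assumes Tesla's reported 2024 revenue of roughly $97.7 billion):

```python
# Back-of-envelope liability math from the comment above. The case count
# and per-settlement figure are hypothetical; the revenue comparison
# assumes Tesla's reported 2024 figure (~$97.7B).
cases_settled_per_year = 100     # of a hypothetical 1,000 suits per year
settlement_usd = 0.3e9           # $0.3B each, mirroring this verdict
annual_liability = cases_settled_per_year * settlement_usd

tesla_revenue_usd = 97.7e9
print(f"annual liability: ${annual_liability / 1e9:.0f}B")              # -> $30B
print(f"share of revenue: {annual_liability / tesla_revenue_usd:.0%}")  # -> 31%
```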

  • Yeah, but how would this work at full driving scale?

    We already have numbers from all the deaths caused by human drivers. Once someone makes self-driving safer than humans, those numbers become the benchmark (and remember that drinking is a factor in many human-driver deaths, so non-drinkers will demand that be accounted for).

  • "Today's verdict is wrong and only works to set back automotive safety..."

    I'm kinda torn on this - in principle, not in this specific case. If your AI performs on par with an average human and no known flaw is at fault, I don't think you should be either.

  • I feel like calling it Autopilot is already risking liability; Full Self Driving is just audacious. There's a reason other companies with similar technology have gone with things like "driving assistance". This has probably had lawyers at Tesla sweating bullets for years.

    I feel like calling it Autopilot is already risking liability

    From an aviation point of view, "Autopilot" is pretty accurate to the original reference. The original autopilot, introduced in 1912, would simply hold an aircraft at a specified heading and altitude without human input, operating the control surfaces to keep it on its directed path. However, if you were at an altitude that would fly you into a mountain, the autopilot would do exactly that. So the current Tesla Autopilot is pretty close to that level of functionality, with the added feature of maintaining a set speed. Note that modern aviation autopilots are far more capable; on specific models they can even take off and land the airplane. (A toy sketch of that original hold-the-setpoints loop follows this comment.)

    Full Self Driving is just audacious. There’s a reason other companies with similar technology have gone with things like driving assistance. This has probably had lawyers at Tesla sweating bullets for years.

    I agree. I think Musk always intended FSD to live up to the name, and perhaps named it that aspirationally, which is all well and good, but most consumers don't share that mindset: if you call it that right now, they assume it has that functionality when they buy it today, which it doesn't. I agree with you that it was a legal liability waiting to happen.
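
For the curious, here is a toy sketch of that 1912-style autopilot as a simple proportional controller; the function name, state fields, and gains are invented for illustration:

```python
# Toy model of an early autopilot: pure altitude/heading hold.
# It chases its setpoints with proportional control and knows
# nothing about the world around it.
def autopilot_step(state, target_alt_ft, target_hdg_deg,
                   k_alt=0.1, k_hdg=0.5):
    """Return control-surface commands that blindly chase the setpoints."""
    elevator = k_alt * (target_alt_ft - state["alt_ft"])
    # Shortest-way-around heading error, in degrees:
    hdg_error = (target_hdg_deg - state["hdg_deg"] + 180) % 360 - 180
    rudder = k_hdg * hdg_error
    # Note what's missing: no terrain awareness. If a mountain sits at
    # the commanded altitude, this loop holds course straight into it.
    return {"elevator": elevator, "rudder": rudder}

print(autopilot_step({"alt_ft": 9500, "hdg_deg": 92}, 10000, 90))
```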

  • Did the car try to stop and fail to do so in time due to the speeding, or did the car not try despite expected collision detection behavior?

    Going off of OP's quote, the jury found the driver responsible but Tesla was also found liable, which is pretty confusing. It might make some sense if Autopilot functionality that was expected to work despite the driver's foot being on the pedal didn't.

    Did the car try to stop and fail to do so in time due to the speeding, or did the car not try despite expected collision detection behavior?

    From the article, it looks like the car didn't even try to stop because Autopilot's braking was overridden by the driver keeping his foot on the accelerator (which isn't normal during Autopilot use).

  • Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's

    Good!

    ... and the entire industry

    Even better!

  • I'm kinda torn on this - in principle, not in this specific case. If your AI performs on par with an average human and no known flaw is at fault, I don't think you should be either.

    I think the problem is that for a long time Tesla, and specifically Elon, went around telling everyone how great their Autopilot was. Turns out that was all exaggeration and sometimes flat-out lying.

    They showed videos of the car driving on its own. Later, we found out it was actually being controlled remotely.

    Yeah, the driver wasn't operating the vehicle safely, but Tesla told him that he didn't have to.

  • I'm kinda torn on this - in principle, not in this specific case. If your AI performs on par with an average human and no known flaw is at fault, I don't think you should be either.

    And that is the point: Tesla's "AI" performs nowhere near human level. Actual self-driving capability is rated on a 5-level scale, and Tesla's systems sit around Level 2 out of those 5.

    Tesla has claimed to have full self-driving for about a decade now, and it has been and continues to be a complete lie. Musk claimed long ago that a Tesla could drive autonomously from LA to NY, while in reality it has trouble leaving the first parking lot.

    I'm unsure how much has changed there, but since Elmo Musk spends more time lying about everything than actually improving his products, I would not hold my breath.

  • Not to defend Tesla here, but how does the technology become "good and well ready" for road testing if you're not allowed to test it on the road? There are a million different driving environments in the US, so it'd be impossible to test all these scenarios without a real-world environment.

    How about fucking not claiming it's FSD, just shipping ACC and lane keeping, and then collecting data and training on that? Also, build a closed circuit and test there.

  • Surprisingly great outcome, and what a spot-on summary from the lead attorney:

    "Tesla designed autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans," said Brett Schreiber, lead attorney for the plaintiffs. "Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm's way. Today's verdict represents justice for Naibel's tragic death and Dillon's lifelong injuries, holding Tesla and Musk accountable for propping up the company’s trillion-dollar valuation with self-driving hype at the expense of human lives," Schreiber said.

    You understand that this is only happening because Elon fell out of Trump's good graces, right? If they were still "bros", this would have been swept under the rug, since Trump's administration controls most, if not all, high judges in the US.

  • I'm kinda torn on this - in principle, not in this specific case. If your AI performs on par with an average human and no known flaw is at fault, I don't think you should be either.

    I think that's a bad idea, both legally and ethically. Vehicles cause tens of thousands of deaths - not to mention injuries - per year in North America. You're proposing that a company that can merely match that record is absolved of liability? Match, not improve.

    In that case, you've given these companies license to literally make money off of removing responsibility for those deaths. The driver's not responsible, and neither is the company. That seems pretty terrible to me, and I'm sure to the loved ones of anyone who has been killed in a vehicle collision.

  • Well, the Obama administration had published initial guidance on testing and safety for automated vehicles in September 2016, which was pre-regulatory but a prelude to potential regulation. Trump trashed it as one of the first things he did on taking office for his first term. I was working in the AV industry at the time.

    That turned everything into the wild west for a couple of years, up until an automated Uber killed a pedestrian in Arizona in 2018. After that, most AV companies scaled public testing way back and deployed extremely conservative versions of their software. If you look at news articles from that time, there's a lot of criticism of how, e.g., Waymos would just grind to a halt in the middle of intersections, as companies would rather take flak for blocking traffic than for running over people.

    But not Tesla. While other companies dialed back their ambitions, Tesla was ripping radar sensors off its vehicles and sending them back out on public roads in droves. It also continued to market the technology - first as "Autopilot" and later as "Full Self Driving" - in ways that vastly overstated its capabilities. To be clear, Full Self Driving, or Level 5 automation in the SAE framework, is science fiction at this point: the idea of a computer system functionally indistinguishable from a capable human driver. Other AV companies are still striving for Level 4 automation, which may include geographic restrictions or limitations to certain types of road infrastructure.

    Part of the blame probably also lies with Biden, whose DOT had the opportunity to address this during his term and didn't. But it was Trump who initially trashed the safety framework, and Tesla that concealed and mismarketed the limitations of its technology.

    You got me interested, so I searched around and found this:

    So, if I understand this correctly, the only fundamental difference between level 4 and 5 is that 4 works on specific known road types with reliable quality (highways, city roads), while level 5 works literally everywhere, including rural dirt paths?

    I'm trying to imagine what other type of geographic difference there might be between 4 and 5, and I'm drawing a blank. (The full ladder of levels is paraphrased below.)
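
For reference, a rough paraphrase of the SAE J3016 levels under discussion; the one-line summaries are mine, not the standard's exact wording:

```python
# SAE J3016 driving-automation levels, loosely paraphrased.
SAE_LEVELS = {
    0: "No automation: warnings at most (e.g. blind-spot alert)",
    1: "Driver assistance: steering OR speed (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed, driver must supervise "
       "at all times (where Tesla's systems sit today)",
    3: "Conditional automation: system drives, human must take over on request",
    4: "High automation: no human needed, but only inside a defined domain "
       "(geofence, road type, weather)",
    5: "Full automation: drives anywhere a capable human could",
}
```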

  • "Today's verdict is wrong and only works to set back automotive safety..."

    This is gonna get overturned on appeal.

    The guy dropped his phone and was fiddling for it AND had his foot pressing down the accelerator.

    Pressing your foot on the accelerator overrides any braking; the car even tells you it won't brake while you're doing it. That's how it should be: the driver should always be able to override these things in case of emergency.

    Maybe if he hadn't done that (edit: held the accelerator down), it'd stick.

  • Brannigan is way smarter than Mush.

    Some of you will be forced through a fine mesh screen for your country. They will be the luckiest of all.

  • Don't take my post as a defense of Tesla, even if there is blame on both sides here. However, I lay the vast majority of it on Tesla's marketing.

    I had to find two other articles to figure out whether the system being used here was Tesla's free, included Autopilot or the more advanced paid (one-time fee or subscription) version called Full Self-Driving (FSD). The answer in this case: Autopilot.

    There are many important distinctions between the two systems. However, Tesla frequently conflates the two when speaking about autonomous technology for its cars, so I blame Tesla. What was required here to avoid these deaths actually has very little to do with autonomous technology as most people know it; it's really about collision avoidance systems. Only in 2024 was there first talk of requiring collision avoidance systems in new vehicles in the USA (source). The cars that include one now (Teslas and some models from other brands) do so on their own, without a legal mandate.

    Tesla claims the collision avoidance system would have been overridden anyway because the driver was holding down the accelerator (which is not normal under Autopilot or FSD conditions; a sketch of that override logic follows this comment). Even if that's true, Tesla has positioned its cars as being highly autonomous, and oftentimes doesn't call out that that level of autonomy only comes with the paid Full Self-Driving upgrade or subscription.

    So I DO blame Tesla, even if the driver contributed to the accident.

    FSD wasn't even available (edit: to use) in 2019. It was a future-purchase add-on that only went into a very limited, invite-only beta in 2020.

    In 2019 there was much less confusion on the topic.
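
A minimal sketch of the accelerator-override behavior described in this thread, using a time-to-collision trigger. This illustrates the behavior the commenters describe, not Tesla's actual logic; the function and threshold are invented:

```python
# Forward-collision braking with an accelerator override, as described
# in this thread. Threshold and names are illustrative only.
def should_auto_brake(gap_m: float, closing_speed_mps: float,
                      accel_pedal_pressed: bool,
                      ttc_threshold_s: float = 1.5) -> bool:
    if closing_speed_mps <= 0:    # not closing on the obstacle
        return False
    if accel_pedal_pressed:
        # A pressed accelerator is treated as driver intent, so automatic
        # braking stands down (and the car warns it won't brake).
        return False
    ttc_s = gap_m / closing_speed_mps   # time to collision
    return ttc_s < ttc_threshold_s

print(should_auto_brake(30.0, 27.0, accel_pedal_pressed=True))   # False: overridden
print(should_auto_brake(30.0, 27.0, accel_pedal_pressed=False))  # True: TTC ~1.1 s
```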

  • From the article, it looks like the car didn't even try to stop because Autopilot's braking was overridden by the driver keeping his foot on the accelerator (which isn't normal during Autopilot use).

    This is correct. And when you do this, the car tells you it won't brake.
