Tesla loses Autopilot wrongful death case in $329 million verdict

Technology
  • Surprisingly great outcome, and what a spot-on summary from the lead attorney:

    "Tesla designed autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans," said Brett Schreiber, lead attorney for the plaintiffs. "Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm's way. Today's verdict represents justice for Naibel's tragic death and Dillon's lifelong injuries, holding Tesla and Musk accountable for propping up the company’s trillion-dollar valuation with self-driving hype at the expense of human lives," Schreiber said.

    You understand that this is only happening because Elon lost Trump's good graces, right? If they were still "bros," this would have been swept under the rug, since Trump's administration controls most, if not all, high judges in the US.

  • I'm kinda torn on this - in principle, not this specific case. If your AI performs on par with an average human and there is no known flaw at fault, I don't think you should be liable either.

    I think that's a bad idea, both legally and ethically. Vehicles cause tens of thousands of deaths - not to mention injuries - per year in North America. You're proposing that a company that can meet that standard is absolved of liability? Meet, not improve.

    In that case, you've given these companies license to literally make money off of removing responsibility for those deaths. The driver's not responsible, and neither is the company. That seems pretty terrible to me, and I'm sure to the loved ones of anyone who has been killed in a vehicle collision.

  • Well, the Obama administration had published initial guidance on testing and safety for automated vehicles in September 2016, which was pre-regulatory but a prelude to potential regulation. Trump trashed it as one of the first things he did after taking office for his first term. I was working in the AV industry at the time.

    That turned everything into the Wild West for a couple of years, up until an automated Uber killed a pedestrian in Arizona in 2018. After that, most AV companies scaled public testing way back and deployed extremely conservative versions of their software. If you look at news articles from that time, there's a lot of criticism of how, e.g., Waymos would just grind to a halt in the middle of intersections, as companies would rather take flak for blocking traffic than for running over people.

    But not Tesla. While other companies dialed back their ambitions, Tesla was ripping radar sensors off its vehicles and sending them back out on public roads in droves. They also continued to market the technology - first as "Autopilot" and later as "Full Self-Driving" - in ways that vastly overstated its capabilities. To be clear, Full Self-Driving, or Level 5 automation in the SAE framework, is science fiction at this point: the idea of a computer system functionally indistinguishable from a capable human driver. Other AV companies are still striving for Level 4 automation, which may include geographic restrictions or limitations to functioning on certain types of road infrastructure.

    Part of the blame probably also lies with Biden, whose DOT had the opportunity to address this during his term and didn't. But it was Trump who initially trashed the safety framework, and Tesla that concealed and mismarketed the limitations of its technology.

    You got me interested, so I searched around and found this:

    So, if I understand this correctly, the only fundamental difference between level 4 and 5 is that 4 works on specific known road types with reliable quality (highways, city roads), while level 5 works literally everywhere, including rural dirt paths?

    I'm trying to imagine what other type of geographic difference there might be between 4 and 5 and I'm drawing a blank.

  • A representative for Tesla sent Ars the following statement: "Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator—which overrode Autopilot—as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver—from day one—admitted and accepted responsibility."

    So, you admit that the company’s marketing has continued to lie for the past six years?

    This is gonna get overturned on appeal.

    The guy dropped his phone and was fiddling for it AND had his foot pressing down the accelerator.

    Pressing your foot on it overrides any braking; the car even tells you it won't brake while you're doing it. That's how it should be - the driver should always be able to override these things in case of emergency.

    Maybe if he hadn't done that (edit: held the accelerator down) it'd stick.

  • Brannigan is way smarter than Mush.

    Some of you will be forced through a fine mesh screen for your country. They will be the luckiest of all.

  • Don't take my post as a defense of Tesla even if there is blame on both sides here. However, I lay the huge majority of it on Tesla marketing.

    I had to find two other articles to figure out whether the system being used here was Tesla's free, included Autopilot or the more advanced paid (one-time fee or subscription) version called Full Self-Driving (FSD). The answer in this case: Autopilot.

    There are many important distinctions between the two systems. However, Tesla frequently conflates the two when speaking about autonomous technology in its cars, so I blame Tesla. What was required here to avoid these deaths actually has very little to do with autonomous technology as most people know it; it's really about collision avoidance systems. Only in 2024 was there first talk of requiring collision avoidance systems in new vehicles in the USA (source). The cars that include them now (Teslas and some models from other brands) do so on their own, without a legal mandate.

    Tesla claims that the collision avoidance systems would have been overridden anyway because the driver was holding down the accelerator (which is not normal under Autopilot or FSD conditions). Even if that's true, Tesla has positioned its cars as highly autonomous, and often doesn't call out that that skilled autonomy only comes with the paid Full Self-Driving upgrade or subscription.

    So I DO blame Tesla, even if the driver contributed to the accident.

    FSD wasn't even available (edit: to use) in 2019. It was a future-purchase add-on that only went into a very limited, invite-only beta in 2020.

    In 2019 there was much less confusion on the topic.

  • Did the car try to stop and fail to do so in time due to the speeding, or did the car not try despite expected collision detection behavior?

    From the article, it looks like the car didn't even try to stop because Autopilot was overridden by the driver pressing their foot on the accelerator (which isn't normal during Autopilot use).

    This is correct. And when you do this, the car tells you it won't brake.

  • Brannigan is way smarter than Mush.

    Farquaad said this, not Brannigan iirc

  • A representative for Tesla sent Ars the following statement: "Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator—which overrode Autopilot—as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver—from day one—admitted and accepted responsibility."

    So, you admit that the company’s marketing has continued to lie for the past six years?

    That's a tough one. Yeah, they sell it as autopilot. But anyone seeing a steering wheel and pedals should reasonably assume they're there to override the autopilot. Protecting him from his own mistake doesn't sound like something an autopilot would do.
    Tesla has done plenty wrong, but this case isn't much of an example of that.

  • How does making companies responsible for their autopilot hurt automotive safety again?

    There's actually a backfire effect here. It could make companies too cautious in rolling out self driving. The status quo is people driving poorly. If you delay the roll out of self driving beyond the point when it's better than people, then more people will die.

  • A representative for Tesla sent Ars the following statement: "Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator—which overrode Autopilot—as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver—from day one—admitted and accepted responsibility."

    So, you admit that the company’s marketing has continued to lie for the past six years?

    I wonder if a lawyer will ever try to apply this as precedent against Boeing or similar...

  • You got me interested, so I searched around and found this:

    So, if I understand this correctly, the only fundamental difference between level 4 and 5 is that 4 works on specific known road types with reliable quality (highways, city roads), while level 5 works literally everywhere, including rural dirt paths?

    I'm trying to imagine what other type of geographic difference there might be between 4 and 5 and I'm drawing a blank.

    Yes, that's it. A lot of AV systems depend on high-resolution 3D maps of an area so they can precisely locate themselves in space. So they may perform relatively well in that defined space but would not be able to do so outside it.

    Level 5 is functionally a human driver. You as a human could be driving off road, in an environment you've never been in before. Maybe it's raining and muddy. Maybe there are unknown hazards within this novel geography, flooding, fallen trees, etc.

    A Level 5 AV system would be able to perform equivalently to a human in those conditions. Again, it's science fiction at this point, but essentially the end goal of vehicle automation is a system that can respond to novel and unpredictable circumstances the same way a human driver would (or better) in that scenario. It's really not defined much better than that end goal - because it's not possible with current technology, it doesn't correspond to a specific set of sensors or software. It's a performance-based, long-term goal.

    This is why it's so irresponsible for Tesla to continue to market its system as "Full Self-Driving." It is nowhere near as adaptable or capable as a human driver. They pretend or insinuate that they have a system equivalent to SAE Level 5 when the entire industry is a minimum of a decade away from such a system.

  • There's actually a backfire effect here. It could make companies too cautious in rolling out self driving. The status quo is people driving poorly. If you delay the roll out of self driving beyond the point when it's better than people, then more people will die.

    Even if self-driving cars kill fewer people, they'll still destroy our quality of life.

  • A representative for Tesla sent Ars the following statement: "Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial. Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator—which overrode Autopilot—as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash. This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver—from day one—admitted and accepted responsibility."

    So, you admit that the company’s marketing has continued to lie for the past six years?

    So if this guy had killed an entire family but survived the accident instead, would the judge blame fucking Tesla Autopilot and let him go free?
    I might as well sue the Catholic church because Jesus did not take the wheel when I closed my eyes while driving and prayed really hard!

    The details of the accident show him accelerating and turning while on Autopilot, too. Not even today does any car have a fully autonomous autopilot driving system that works on all city streets or roads, and this was in 2019.

    Did Elon fuck the judge's wife and then his entire family left him for it? wtf is $330 million for a crash verdict anyway?

    So if this guy had killed an entire family but survived the accident instead, would the judge blame fucking Tesla Autopilot and let him go free?
    I might as well sue the Catholic church because Jesus did not take the wheel when I closed my eyes while driving and prayed really hard!

    The details of the accident show him accelerating and turning while on Autopilot, too. Not even today does any car have a fully autonomous autopilot driving system that works on all city streets or roads, and this was in 2019.

    Did Elon fuck the judge's wife and then his entire family left him for it? wtf is $330 million for a crash verdict anyway?

    If Tesla promises and doesn't deliver, they pay. That's the price of doing business when lives are on the line.

    That's a tough one. Yeah, they sell it as autopilot. But anyone seeing a steering wheel and pedals should reasonably assume they're there to override the autopilot. Protecting him from his own mistake doesn't sound like something an autopilot would do.
    Tesla has done plenty wrong, but this case isn't much of an example of that.

    More than one person can be at fault, my friend. Don't lie about your product and expect no consequences.

  • This is gonna get overturned on appeal.

    The guy dropped his phone and was fiddling for it AND had his foot pressing down the accelerator.

    Pressing your foot on it overrides any braking; the car even tells you it won't brake while you're doing it. That's how it should be - the driver should always be able to override these things in case of emergency.

    Maybe if he hadn't done that (edit: held the accelerator down) it'd stick.

    On what grounds? Only certain things can be appealed, not "you're wrong" gut feelings.

  • More than one person can be at fault, my friend. Don't lie about your product and expect no consequences.

    I don't know. If it is possible to override the autopilot then it's a pretty good bet that putting your foot on the accelerator would do it. It's hard to really imagine this scenario where that wouldn't result in the car going into manual mode. Surely would be more dangerous if you couldn't override the autopilot.

  • Today's verdict is wrong and only works to set back automotive safety and jeopardize Tesla's

    Good!

    ... and the entire industry

    Even better!

    Did you read it tho? Tesla is at fault for this guy overriding the safety systems by pushing down on the accelerator and looking for his phone at the same time?

    I do not agree with Tesla often. Their marketing is bullshit, their cars are low quality pieces of shit. But I don't think they should be held liable for THIS idiot's driving. They should still be held liable when Autopilot itself fucks up.

  • On what grounds? Only certain things can be appealed, not "you're wrong" gut feelings.

    Well, their lawyers stated "We plan to appeal given the substantial errors of law and irregularities at trial"

    They can also separately appeal the actual award as disproportionate. The amount is pretty ridiculous given the circumstances, even if the guilty verdict stands.

    There was a racial discrimination suit Tesla lost where the guy was awarded $137 million. Tesla appealed the amount and got it reduced to $15 million. The guy rejected the $15 million and wanted a retrial on the award, and then got $3.2 million.
