Tesla Robotaxi Freaks Out and Drives into Oncoming Traffic on First Day
-
Driving is culturally specific, even. The way rules are followed and practiced often differs by region.
This is one of the problems driving automation solves trivially when applied at scale. Machines will follow the same rules regardless of where they are, which is better for everyone.
The ethics of putting automation in control of potentially life-threatening machines is also relevant.
You'd shit yourself if you knew how many life-threatening machines are already controlled by computers far simpler than anything in a self-driving car. Industrially, we have learned the lesson that computers, even ones running on extremely simple logic, just completely outclass humans on safety because they do the same thing every time. There are giant chemical manufacturing facilities run by a couple of guys in a control room watching a screen, because 99% of it is already automated. I'm talking thousands of gallons an hour of hazardous, poisonous, flammable materials running through a system run on 20-year-old computers. Chemical additions at your local water treatment plant that could kill thousands of people if done wrong, all controlled by machines because we know they're more reliable than humans.
With humans we can attribute cause and attempt improvement; with automation it's different.
A machine can't drink a handle of vodka and get behind the wheel, nor can it drive home sobbing after a rough breakup, unable to process information properly. You can also update all of them at once instead of dealing with PSA campaigns telling people not to do something that got someone killed. Self-driving car makes a mistake? You don't have to guess what was going through its head, it has a log. Figure out how to fix it? Guess what, they're all fixed with the same software update. If a human makes that mistake, thousands of people will keep making that same mistake until cars or roads are redesigned and those changes have a way to filter through all of society.
I just don't see a need for this at all. I think investing in public transportation more than reproduces all the benefits of automated cars without nearly as many of the dangers and risks.
This is a valid point, but this doesn't have to be either/or. Cars have a great utility even in a system with public transit. People and freight have to get from the rail station or port to wherever they need to go somehow, even in a utopia with a perfect public transit system. We can do both, we're just choosing not to in America, and it's not like self driving cars are intrinsically opposed to public transit just by existing.
What are you anticipating for the automated driving adoption rate? I'm expecting it to be extremely low, as most people cannot afford new cars. We are talking probably decades before there are enough automated driving cars to fundamentally alter traffic in such a way as to entirely eliminate human driving culture.
In response to the "humans are fallible" bit, I'll remark again that algorithms are very fallible. Statistically, even. And while lots of automated algorithms are controlling life-and-death machines, try justifying that to someone whose entire family was killed by an AI. How do they even receive compensation for that? Who is at fault? A family died. With human drivers we can ascribe fault very easily. With automated algorithms fault is less easily ascribed, and the public writ large is going to have a much harder time accepting that.
Also, with natural gas and other industrial systems there are far fewer variables than on a busy freeway. There's a reason why it hasn't happened until recently. Hundreds of humans all in control of large vehicles moving in a long line at speed is a very complicated environment with many factors to consider. How accurately will algorithms be able to infer driving intent based on subtle movements of the vehicles in front of and behind them? How accurate is the situational awareness of an algorithm, especially when combined road factors are involved?
It's just not as simple as it's being made out to be. This isn't a chess problem; it's not a question of controlling train cars on set tracks with fixed timetables and universal controllers. The way cars exist presently is very, very open ended. I agree that if 80+% of road vehicles were automated, it would have such an impact on road culture as to standardize certain behaviors. But we are very, very far away from that in North America. Most of the people in my area are driving cars from the early 2010s. It's going to be at least a decade before any sizable share of vehicles are current-year models. And until then, algorithms have these obstacles that cannot easily be overcome.
It's like I said earlier: the last 10% of optimization requires an exponentially larger amount of energy and development than the first 90% does. It's the same problem faced with other forms of automation. And a difference of 10% in terms of performance is... huge when it comes to road vehicles.
-
I saw the Tesla Robotaxi:
- Drive into oncoming traffic, getting honked at in the process.
- Signal a turn and then go straight at a stop sign with turn signal on.
- Park in a fire lane to drop off the passenger.
And that was in a single 22-minute ride. Not great performance at all.
Watching anything that fElon fail sparks joy.
-
What real world problem does this solve?
Nothing that a train + scooter / bicycle cannot solve imo
-
I am entirely opposed to driving algorithms. Autopilot on planes works very well because it is used in open sky and does not have to make major decisions about moving in close proximity to other planes and obstacles. It's almost entirely mathematical, and even then, in specific circumstances it is designed to disengage and put control back in the hands of a human.
Cars do not have this luxury and operate entirely in close proximity to other vehicles and obstacles. Very little of the act of driving a car is math. It's almost entirely decision making. It requires fast and instinctive response to subtle changes in environment, pattern recognition that human brains are better at than algorithms.
To me this technology perfectly encapsulates the difficulty in making algorithms that mimic human behavior. The last 10% of optimization needed to reach par with humans requires exponentially more energy and research than the first 90% does. 90% of the performance of a human is entirely insufficient where life and death is concerned.
Investment costs should be going to public transport systems. They are more cost efficient, more accessible, more fuel/resource efficient, and far far far safer than cars could ever be, even with all human drivers. This is a colossal waste of energy, time, and money for a product that will not be on par with human performance for a long time. Those resources could be making our world more accessible for everyone; instead they're making it more accessible for no one and making the roads significantly more dangerous. Capitalism will be the end of us all if we let it. Sorry that train and bus infrastructure isn't "flashy enough" for you. You clearly haven't seen the public transport systems in Beijing. The technology we have here is decades behind and so underfunded it's infuriating.
I always have the same thought when I see self driving taxi news.
"Americans will go bankrupt trying to prop up the auto/gas industries rather than simply building a train".
And it's true. So much money is being burned on a subpar and dangerous product. Yet we've just cut and cancelled extremely beneficial high speed rail projects that were overwhelmingly voted for by the people.
-
Public transport systems are just part of a mobility solution, but it isn't viable to have that everywhere. Heck, even here in The Netherlands, a country the size of a postage stamp, public transport doesn't work outside of the major cities. So basically, outside of the cities, we are also relying on cars.
Therefore, I do believe there will be a place for autonomous driving in the future of mobility, and that it has the potential to reduce the number of accidents, traffic jams, and parking problems while increasing the average speed we drive at.
The only thing that has me a bit worried is Tesla's approach to autonomous driving, fully relying on the camera system. Somehow, Musk believes a camera system is superior to human vision, when it's not. I drive a Tesla (yeah, I know) and if the conditions aren't perfect, the car disables "safety" features, like lane assist. For instance, when it's raining heavily or when the sun is shining directly into the camera lenses. This must have been a key reason for choosing Austin for the demo/rollout.
Meanwhile, we see what other manufacturers use and how they are progressing. For instance, BMW and Mercedes are doing well with their systems, which are a blend of cameras and sensors. To me, that does seem like the way to go to introduce autonomous driving safely.
I believe Austin was chosen because they're fairly lax about the regulations and safety requirements.
Waymo already got the deal in Cali. And Cali seems much more strict. Austin is offering them faster time to market at the cost of civilian safety.
-
A man who can’t launch a rocket to save his life is also incompetent at making self driving cars? His mediocrity knows no bounds.
To be fair, Musk only has money and doesn't do shit at either company.
-
Which brings up an interesting question, when is a driverless car 'parked' vs. 'stopped'?
When the engine is off?
Of course, how do you tell with an electric car?
-
It already jumped up about 10% on Monday simply because the service launched. Even if the service crashes and burns, they'll jump to the next hype topic like robots or AI or whatever and the stock price will stay up.
And it's fallen back down about 30% of those gains already. Hype causes spikes... that's nothing new.
-
You seem to have missed the point. Whether or not you think that would be an easy job, the whole reason you'd be there is to be the one that takes all the blame when the autopilot kills someone. It will be your name, your face, every single record of your past mistakes getting blasted on the news and in court because Elon's shitty vanity project finally killed a real child instead of a test dummy. You'll be the one having to explain to a grieving family just how hard it is to actually pay complete attention every moment of every day, when all you've had to do before is just sit there.
How about you pay attention and PREVENT the autopilot from killing someone? Like it's your job to do?
-
So, Tesla Robotaxis drive like a slightly drunk and confused tourist with asshole driving etiquette.
Those right turns on red were like, "oh you get to go? That's permission for me to go too!"
I know many people who believe that "right on red" means they have the right of way to make the turn and don't have to stop first or yield to traffic.
-
To be fair, Musk only has money and doesn't do shit at either company.
It's hilarious to me that Musk claims to work 100 hours a week when he's the CEO of five companies. Even if the claim were true (and of course it isn't), that means being the CEO of one of his companies is a 20-hour-a-week job at best.
-
I know many people who believe that "right on red" means they have the right of way to make the turn and don't have to stop first or yield to traffic.
I know many ~~people~~ fucking morons
-
To be fair, Musk only has money and doesn't do shit at either company.
He meddles. That much is apparent. The Cybertruck is obviously a top-down design, as evidenced by the numerous atrocious design compromises the engineers had to make just to make it real: from the glued-on "exoskeleton", to the hollowed ALUMINUM frame, to the complete lack of physical controls, to the default failure state turning it into a coffin, to the lack of waterproofing, etc.
-
Still work to be done: it uses the blinkers.
At least they were used incorrectly, so it's just as unpredictable.
-
That wasn’t FSD. In the crash report it shows FSD wasn’t enabled. The driver applied a torque to the steering wheel and disengaged it. They were probably reaching into the back seat while “supervising”.
I covered that crash.
FSD is never enabled at the moment of impact, because FSD shuts off less than a second before impact, so that Tesla's lawyers and most loyal fans can make exactly the arguments you are making. Torque would be applied to the steering wheel when any vehicle departs the roadway, as the driver is thrown around like a ragdoll as they clutch the wheel. Depending on the car, torque can also be applied externally to the tires by rough terrain to shift the steering wheel.
No evidence to suggest the driver was distracted. Prove me wrong if you have that evidence.
Also welcome to the platform, new user!
-
I covered that crash.
FSD is never enabled at the moment of impact, because FSD shuts off less than a second before impact, so that Tesla's lawyers and most loyal fans can make exactly the arguments you are making. Torque would be applied to the steering wheel when any vehicle departs the roadway, as the driver is thrown around like a ragdoll as they clutch the wheel. Depending on the car, torque can also be applied externally to the tires by rough terrain to shift the steering wheel.
No evidence to suggest the driver was distracted. Prove me wrong if you have that evidence.
Also welcome to the platform, new user!
Tesla counts any crash within 5 seconds of FSD disengagement as an FSD crash. Where is the cabin camera footage of the driver not being distracted?
Here is a video that goes over it: https://youtube.com/watch?v=JoXAUfF029I
Thanks for the welcome, but I’m not new just a lemm.ee user.
-
That wasn’t FSD. In the crash report it shows FSD wasn’t enabled. The driver applied a torque to the steering wheel and disengaged it. They were probably reaching into the back seat while “supervising”.
You weren't the user who posted that video, but you seem to be quite knowledgeable in this specific case...
Can you link that crash report? Or can you cite some confirmed details about the incident?
-
You weren't the user who posted that video, but you seem to be quite knowledgeable in this specific case...
Can you link that crash report? Or can you cite some confirmed details about the incident?
See this video: https://youtube.com/watch?v=JoXAUfF029I
-
Ah, Ok.
I agree with accountability, but not with the points system. That's almost like a "three strikes" rule for drunk drivers.
That's not really accountability; that's handing out free passes.
That's almost like a "three strikes" rule for drunk drivers.
Oh man, that would be amazing. If after three strikes all drunk driving could be eliminated... If only we could be so lucky.
He's not talking about a per-vehicle points system; he's talking about a global points system for Tesla Inc. If, after a few incidents, Tesla FSD essentially had its license revoked across the whole fleet, I mean, that's pretty strict accountability I'd say. That's definitely not handing out free passes; it's more like you get a few warnings and a chance to fix issues before the entire program is ended nationwide.
-
How about you pay attention and PREVENT the autopilot from killing someone? Like it's your job to do?
This is sarcasm, right?