Breaking news from around the world.
News that is American but has an international facet may also be posted here.
These guidelines will be enforced on a know-it-when-I-see-it basis.
For US News, see the US News community.
This community’s icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
tl;dr: Autonomous driving uses a whole host of different sensor types. Musk said “NO, WE WILL ONLY USE VISION CAMERA SENSORS.” And that doesn’t work.
Guess what? I have eyes; I can see. You know what I want an autonomous vehicle to be able to do? Receive sensory input that I can’t.
How do we prove we’re not robots? Fucking select the picture with traffic lights or buses, right? How was this allowed?
“Honey, the car ordered itself new tires again!”
We also use way more than just our eyes to navigate. We have accelerometers (ear canals), pressure sensors (touch), and Doppler sensors (ears) to augment how we get around. It was a fool’s errand to try to figure everything out with cameras alone.
Also, you can alter the vision input by moving your head, blocking the sun with your hand, etc.
This seems like a classic case of ego from Musk.
He’s such a fucking moron
This news is months old. Honestly, I agree with Musk on this one. We are able to drive with two (sometimes only one) low-resolution (sometimes out-of-focus, sometimes closed) cameras on a pivot inside the vehicle, with blind spots all around. Much of our rear situational awareness comes from two or three small, warped mirrors strategically placed to supplement those two low-resolution cameras on a pivot. Tesla has already reverted to adding some radar back in… The lidar option sounds like a dystopia waiting to happen (just imagine all streets filled with aftermarket invisible lasers from third-world countries; any one of them could blind you under unlucky circumstances). The best way forward is visual, and if you watch up-to-date test drives on YouTube, you can see they are doing quite well with what they have.
What’s worse is that it will be hard to reverse this decision. Tesla is a data and AI company, compiling vision and driving data from drivers around the world. If you change the sensor format or layout dramatically, the old data and the new data become hard to hybridize. You basically start from scratch, at least for the new sensors, and you fail to deliver on a promise to old customers.
I don’t see why that would have to be the case if the new data is a complete superset of the old data. If all the same cameras are there, then the additional sensors and the data those sensors collect can actually help train the processing of the visual-only data, right?
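To make that superset argument concrete, here is a minimal toy sketch (entirely made-up numbers, not Tesla’s actual pipeline): lidar depth readings act as training labels for a camera-only depth model, so the extra sensor improves the vision-only system rather than invalidating old camera data. The inverse-disparity depth model and the calibration constant are illustrative assumptions.

```python
# Hypothetical sketch: lidar supervises a camera-only depth estimator.
# Camera measures disparity d; we fit depth ≈ k / d by least squares,
# with lidar supplying the ground-truth depths used only at training time.

def fit_scale(disparities, lidar_depths):
    """Fit the calibration constant k in depth = k * (1/d).

    Closed-form least squares: k = sum(x*y) / sum(x*x), where x = 1/d
    and y is the lidar-measured depth for that pixel.
    """
    xs = [1.0 / d for d in disparities]
    num = sum(x * y for x, y in zip(xs, lidar_depths))
    den = sum(x * x for x in xs)
    return num / den

# Synthetic training set: true constant k = 700 (e.g. focal length * baseline).
true_k = 700.0
disparities = [7.0, 14.0, 35.0, 70.0]          # camera observations
lidar_depths = [true_k / d for d in disparities]  # lidar "labels"

k = fit_scale(disparities, lidar_depths)
print(round(k, 1))  # recovers the calibration purely from lidar supervision
```

At inference the model needs only the camera (depth = k / d), so cars without lidar still benefit; that is the sense in which the richer sensor suite is a strict superset of the old one.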
Sounds to me like they should go full steam ahead with new sensors; they will never deliver on what they’ve promised with the tech they are using today.
Old customers’ situation won’t change, and it would only get better going forward.