One of Tesla’s key technologies lies in tatters, and it is all Musk’s fault.
Nougat

tl;dr: Autonomous driving relies on a whole host of different kinds of sensors. Musk said “NO, WE WILL ONLY USE VISION CAMERA SENSORS.” And that doesn’t work.

Guess what? I have eyes; I can see. You know what I want an autonomous vehicle to be able to do? Receive sensory input that I can’t.

kestrel7

How do we prove we’re not robots? Fucking select the picture with traffic lights or buses, right? How was this allowed.

“Honey, the car ordered itself new tires again!”

bfg9k

We also use way more than just our eyes to navigate. We have accelerometers (inner-ear canals), pressure sensors (touch), and Doppler sensors (ears) to augment how we get around. It was a fool’s errand to try to figure everything out with cameras alone.
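The multi-sensor point can be made concrete with a toy example (an illustration of the general principle, nothing from Tesla’s actual stack): inverse-variance weighting is the textbook way to combine two noisy sensors, and the fused estimate is always more certain than either input alone.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance fusion of two independent noisy estimates.

    The lower-variance (more trustworthy) sensor gets more weight,
    and the fused variance is smaller than either input's.
    """
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# A camera guesses an obstacle at 10.0 m (variance 4.0); a radar says
# 9.0 m (variance 1.0). The fusion leans toward the radar and is more
# confident than either sensor on its own.
depth, var = fuse(10.0, 4.0, 9.0, 1.0)  # -> (9.2, 0.8)
```

This is why redundant sensor types help: each one covers for the others’ weaknesses instead of a single modality having to be right every time.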

BedSharkPal

Also, you can alter the vision input by moving your head, blocking the sun with your hand, etc.

This seems like a classic case of ego from Musk.

deleted by creator

He’s such a fucking moron

This news is months old. Honestly, I agree with Musk on this one. We are able to drive with two (sometimes only one) low-resolution (sometimes out-of-focus, sometimes closed) cameras on a pivot inside the vehicle, with further blind spots all around. Much of our rear situational awareness comes from two or three small, warped mirrors strategically placed to augment those two low-resolution cameras on a pivot. Tesla has already reverted to adding some radar back in… The lidar option sounds like a dystopia waiting to happen (just imagine all streets filled with invisible aftermarket lasers from third-world countries; any one of them could blind you under unlucky circumstances). The best way forward is visual, and if you watch up-to-date test drives on YouTube you can see they are doing quite well with what they have.

What’s worse is that this decision will be hard to reverse. Tesla is a data and AI company compiling vision and driving data from drivers around the world. If you change the sensor format or layout dramatically, all the old data and all the new data becomes hard to hybridize. You basically start from scratch, at least for the new sensors, and you fail to deliver on promises to old customers.

If you change the sensor format or layout dramatically, all the old data and all the new data becomes hard to hybridize.

I don’t see why that would have to be the case if the new data is a complete superset of the old data. If all the same cameras are there, then the additional sensors and the data those sensors collect can actually help train the processing of the visual-only data, right?
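The superset argument can be sketched with a toy model (purely illustrative; this assumes nothing about Tesla’s actual pipeline): depth readings from an extra sensor like lidar can serve as training labels for a predictor that, at inference time, consumes only the camera signal. The camera data format never changes; the new sensor only improves supervision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "camera" feature: stereo disparity, which is inversely
# proportional to true depth. The lidar provides noisy depth labels.
disparity = rng.uniform(0.1, 1.0, size=200)
true_depth = 1.0 / disparity
lidar_depth = true_depth + rng.normal(0.0, 0.05, size=200)

# Fit a vision-only depth model (linear in 1/disparity) against the
# lidar labels. At inference time, only camera data is consumed.
X = np.column_stack([1.0 / disparity, np.ones_like(disparity)])
coef, *_ = np.linalg.lstsq(X, lidar_depth, rcond=None)

pred = X @ coef  # predictions from camera input alone
mean_err = float(np.abs(pred - true_depth).mean())
```

In this setup the old vision-only fleet data stays fully usable, and the lidar-equipped cars simply contribute better-labeled training examples.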

Sounds to me like they should go full steam ahead with new sensors; they will never deliver on what they’ve promised with the tech they are using today.

Old customers’ situation wouldn’t change, and it would only get better going forward.
