Are you still using YOLO after the BiFPN for feature extraction? The illustration of the architecture from the previous Tesla AI Day didn't show it anymore, but it wasn't clear if it was dropped from the architecture, or was present but just not shown on the slide.
So much hard work goes into invention. It's easy to dismiss it and say FSD Beta sucks when it makes mistakes, but things are going in the right direction.
Nice direction, much like how an experienced human driver approaches it. However, I do think there's a need for multiple solutions depending on the driving environment/situation, with FSD switching modes dynamically, e.g. highway, parking lot, garage, animal/human presence, severe weather…
I'm really not impressed by the collision avoidance. I mean, it's nice they can do this now, but collision avoidance is something pretty basic in motion planning. For example, M. Werling et al. did comparable things in 2010 with classic methods.
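For reference, the 2010 work mentioned here (Werling et al., optimal trajectory generation in Frenet frames) does exactly this with classic methods: sample polynomial lateral candidates, roll them forward, and prune the ones that collide. A toy sketch of that idea — the geometry and numbers are made up for illustration, not taken from the paper or from Tesla's stack:

```python
import numpy as np

def quintic(s0, s1, T):
    """Quintic lateral profile d(t) with d(0)=s0, d(T)=s1 and zero
    velocity/acceleration at both ends -- the standard Frenet-frame move."""
    A = np.array([[T**3,   T**4,    T**5],
                  [3*T**2, 4*T**3,  5*T**4],
                  [6*T,    12*T**2, 20*T**3]])
    a3, a4, a5 = np.linalg.solve(A, np.array([s1 - s0, 0.0, 0.0]))
    return lambda t: s0 + a3*t**3 + a4*t**4 + a5*t**5

def collides(lateral, speed, obstacle, T=3.0, radius=1.5, steps=30):
    """Roll the candidate forward at constant speed and test disc overlap."""
    for t in np.linspace(0.0, T, steps):
        if np.hypot(speed*t - obstacle[0], lateral(t) - obstacle[1]) < radius:
            return True
    return False

# Stopped object 20 m straight ahead; sample lateral end-offsets and
# keep the smallest collision-free swerve.
obstacle = (20.0, 0.0)
candidates = [0.0, 1.0, -1.0, 2.0, -2.0, 3.0, -3.0]
safe = [d for d in candidates
        if not collides(quintic(0.0, d, 3.0), speed=10.0, obstacle=obstacle)]
best = min(safe, key=abs)   # smallest collision-free lateral offset: 2.0 m
```

The real planner scores candidates by jerk and time as well, but the sample-and-prune skeleton is the same.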
Everything is a movable object 🙂 Good, that will eventually solve for containers falling off trucks, rocks falling off a mountain slope, a truck or car driving off an overpass, walls of buildings that collapse, etc., etc.
Does anybody know if Tesla is storing knowledge generated while driving? Or plans to do so? Like when a new building site with a traffic light opens just behind a curve. We humans adjust our driving according to new situations we've encountered in the past.
25:00 is a case where people will tell the newspapers the car accelerated by itself and the brake pedal didn't work – and the newspaper will print that BS because it's Tesla
Great presentation. It has been really nice to see how Tesla has changed the direction of their FSD architecture, each time getting a little closer to solving FSD.
This was fantastic. I'd like to think I understood most of it, though I might need a few re-watches. But it's possible to see where FSD is going and how close it is to being publicly available to anyone who wants it.
Very interesting. The ego car should assume that every other car is trying to avoid collisions in a similar way, and also assume that every other car assumes that every other car is assuming this, with everyone changing their courses accordingly. It gets pretty complex! Would love to see simulations of hundreds of cars running this to see how they behave.
Would love to hear how phantom braking plays into this. I took a long trip recently and had 14 occurrences of phantom braking on the interstate… very annoying, and not safe if a person doesn't react appropriately to the false braking. Does anyone have any insight?
And my Tesla still tries to swerve off the road every time there is a turning lane. They really need to test these outside of California.
Good luck to anyone trying to catch up with Tesla. This is what real innovation means 🙂 Thank you Ashok and Tesla Team 🙂
Thank you Ashok for all of your hard work and for presenting this to the world.
Without infrared, it's of no use. I didn't go through it completely, but I believe they haven't mentioned it.
How about collapsing bridges, mudslides, sudden holes in the road?
Very cool and informative!
24:39 what kind of drugs do you have to be on to do something like this by mistake???
I totally understood everything in this presentation!! 😂
You should consider using LiDAR or radar to see exactly how far behind the competition is.
Thank you!
20:30 GOLD – right there – forget FSD L4/L5 for now – just license this to Apple/Google and BOOOOM
Train a NeRF in a single shot? My toy training projects train the LEGO model for 24 hrs… 🤣
Amazing. Thank you for this. 👍🏻
Massive architecture changes! Awesome work guys! I bet some of this makes it into Optimus! 😉
Can’t believe how much inference performance you keep extracting from HW3. Now processing 36fps with higher accuracy and more capability. Insane!!
Thank you for the upload! Can't wait for FSD to come true eventually!! <3
Does Tesla have collision avoidance while backing up?
For depth perception… why not use two cameras where each single camera is, to get stereo vision like humans have?
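For what it's worth, the classic two-camera triangulation being suggested gives depth Z = f·B/d for focal length f (pixels), baseline B, and disparity d (pixels), and its error grows quadratically with range. A quick sketch of why a short baseline between closely mounted cameras struggles at distance — the numbers are purely illustrative, not any real camera's specs:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 10 cm baseline,
# 4 px of measured disparity.
depth = stereo_depth(1000.0, 0.10, 4.0)   # -> 25.0 metres

# The catch: dZ/dd = -f*B/d^2, so a single pixel of disparity error
# at this range shifts the estimate by several metres.
error = stereo_depth(1000.0, 0.10, 3.0) - depth   # ~8.3 m for 1 px error
```

That quadratic error growth is one reason a tight stereo pair on a car body is a hard sell for highway distances, compared with learned depth plus motion parallax.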