It will be imperative for FSD customer safety that Tesla releases accurate and robust Autopilot statistics

In his recent interview with Sandy Munro, Elon tossed out a small statement that has flown under the radar. He said that the reason FSD Beta hasn’t been released widely is that it’s inherently dangerous to release a driving solution which appears to work correctly, causing drivers to become complacent. I understand why Elon would say this, and he’s absolutely correct, but if he truly means it we may not see a public FSD Beta for years, not until FSD is at least as safe as a human; either that, or Tesla needs more robust driver monitoring and education.

What’s worse than a 99% solution at lulling you into a false sense of security? A 99.9% solution. What’s worse than a 99.9% solution at lulling you into complacency? A 99.99% solution. And so on.

There is very little worse than a vehicle that kills somebody every 10,000 miles. Such a system would “feel” perfect but be extremely deadly: since the average driver covers well over 10,000 miles a year, effectively every Tesla customer would die within the first year of activation.

Currently Autopilot is so obviously unreliable that only a fool would use it without paying attention for more than a couple of seconds here and there. Every time AP makes a mistake, your decision to pay attention is rewarded. But we could be nearing a point where those rewards are fewer and farther between. Covid safety is a great example of what happens when “only” 1% of the population is killed by something: “Well, I don’t know anyone who died from ___, so I’m not going to do anything.”

If something works perfectly for 11 months straight, you are not going to be ready for day 360 when it suddenly swerves into a telephone pole. Uber was reporting a disengagement every 13 miles in the city when one of their drivers killed a pedestrian while watching TV. (By comparison, u/DirtyTesla on FSD Beta v8.1 just reported about 2-3 miles per disengagement.) Forget 13 miles, just wait until we’re up to 1,000 or 10,000.
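To put those rates in perspective, here’s a rough back-of-the-envelope sketch. The daily mileage is my own assumption (roughly 13,500 miles per year, about 37 miles a day), not a figure from any report; the point is how quickly “miles per disengagement” turns into “months without ever seeing a failure.”

```python
# Back-of-the-envelope: average days between disengagements at various rates.
# The daily mileage is an assumed typical figure, not real data.

MILES_PER_DAY = 37  # assumed: roughly 13,500 miles per year

for miles_per_disengagement in (3, 13, 1_000, 10_000):
    days_between = miles_per_disengagement / MILES_PER_DAY
    print(f"1 disengagement per {miles_per_disengagement:,} miles "
          f"≈ one failure every {days_between:.1f} days of typical driving")
```

At a few miles per disengagement you’re corrected several times a day; at 10,000 miles per disengagement you could easily go the better part of a year without ever seeing the system fail.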

The only way that Tesla customers will be able to understand the risks is to be told about them. My fear is that Tesla will continue putting out charts whose purpose is to serve as marketing instead of serving as a warning to customers. Already I hear “Autopilot is safer than a human!” or “Autopilot is already safer than most drivers on the road!”, claims which are both extremely inaccurate and bolstered by Tesla’s published safety statistics, which include a human safety driver. The only thing that will convince people that FSD is not safe enough to override the driver monitoring is regularly published, accurate disengagement reports. We need to know the true risks of an unsupervised FSD system so that we can see, on a very regular basis, proof that it’s still unsafe to use unsupervised. The danger without that information is that FSD customers begin believing that the driver monitoring is nothing but a bureaucratic obstacle instead of a legitimate safety feature.

Ideally, the regular report would be expressed relative to human safety. For example:

September 2021 – FSD v16 – v25
5.3% Human Safety – [A crash estimated every 25,000 miles]

Something easy to understand that also drives home the fact that even though it feels safe, it’s still very dangerous if left unsupervised. The statistics should be relative to our best data for human drivers, and safety should be calculated based on road segments and road conditions for the best apples-to-apples comparisons: night vs. day, interstate vs. city, urban vs. rural, high speed vs. low speed, rain vs. clear, etc. It might even make sense to publish multiple statistics: “50% human safety during the day on the interstate. 1% human safety at night in the city in the rain.”
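To make the arithmetic behind that kind of report concrete, here’s a minimal sketch of how a per-segment “% human safety” figure could be computed. Every number below is a made-up placeholder (my assumption, not Tesla or NHTSA data); the idea is simply that the headline figure is the ratio of FSD miles-per-crash to human miles-per-crash for the same kind of driving. The mock “5.3% human safety” line above is the same ratio: roughly 25,000 FSD miles per crash against a human baseline on the order of 470,000 miles per crash.

```python
# Rough sketch of a "% human safety" metric, broken out by road segment
# and conditions. Every number here is an illustrative placeholder,
# NOT real Tesla or NHTSA data.

HUMAN_MILES_PER_CRASH = {
    # Assumed human baselines for each (road, time of day, weather) segment.
    ("interstate", "day", "clear"): 600_000,
    ("city", "night", "rain"): 250_000,
}

FSD_MILES_PER_CRASH = {
    # Hypothetical FSD fleet figures for the same segments.
    ("interstate", "day", "clear"): 300_000,  # ~50% of human safety
    ("city", "night", "rain"): 2_500,         # ~1% of human safety
}

def percent_human_safety(fsd_miles: float, human_miles: float) -> float:
    """FSD miles-per-crash as a percentage of the human miles-per-crash
    for the same kind of driving."""
    return 100.0 * fsd_miles / human_miles

for segment, human_miles in HUMAN_MILES_PER_CRASH.items():
    road, time_of_day, weather = segment
    fsd_miles = FSD_MILES_PER_CRASH[segment]
    pct = percent_human_safety(fsd_miles, human_miles)
    print(f"{pct:.0f}% human safety: {time_of_day}, {road}, {weather} "
          f"(a crash estimated every {fsd_miles:,} miles)")
```

Segmenting the ratio this way is what keeps the comparison honest: a fleet that drives mostly on sunny interstates will look far safer than the average human unless it is measured against humans driving the same roads in the same conditions.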

Not only would it provide an essential metric to reinforce the need for driver supervision, it would also create a helpful framework for regulators to give early feedback on what data they need to see before approving an FSD system.

submitted by /u/im_thatoneguy
