How Tesla determines the speed limit for a road is a common question among owners. The simplest explanation is that the car reads posted signs as it drives past, but that only tells part of the story. In practice, the vehicle continuously compares what its cameras observe with pre-existing map data, and the final displayed limit is the result of how those two sources are balanced.

The Power and Pitfalls of Vision

Tesla relies on multiple neural networks as part of the Tesla Vision stack and Full Self-Driving (FSD). One of these networks is trained for optical character recognition (OCR), which lets the car read speed limit signs in real time as it passes them. Vision is especially useful for dynamic situations that static maps cannot capture, such as temporary speed restrictions in construction zones.
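To make that a little more concrete, here is a rough sketch of what a single sign reading could look like if the OCR network reports a value alongside a confidence score. Tesla has not published its internal data structures, so the class and field names below are purely illustrative.

```python
from dataclasses import dataclass

# Hypothetical sketch only: Tesla's internal data structures are not public,
# so these names and fields are illustrative, not the real thing.
@dataclass
class SignReading:
    limit_kph: int       # value read off the sign, e.g. 80
    confidence: float    # 0.0 to 1.0 score from the OCR network
    is_temporary: bool   # True for temporary signage such as a construction zone
```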

Vision does have limitations. Poor visibility can cause the system to miss signs, and even on clear days signs can be obscured by sun glare, overgrown foliage, or other vehicles. Signs placed far off the roadway can go unread as well. The system can also misread characters; a frequently reported misinterpretation is confusing 80 and 60 limits, although most signage is detected correctly.

The Backup: Map Data

To address the imperfections of vision alone, Tesla cross-checks camera readings against high-fidelity map data that includes attributes such as speed limits for specific road segments. Map data provides a stable reference that isn’t affected by weather, passing vehicles, or damaged or missing signs.

The trade-off is that maps can become out of date. If a municipality lowers a road’s speed limit, there can be a delay before the updated limit is reflected in Tesla’s map updates, so the map may temporarily indicate a higher limit than the posted signs.
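To picture that trade-off, imagine the map data as a simple table keyed by road segment, where a recently lowered limit has not yet propagated. The segment IDs, values, and lookup function below are invented for illustration; the real map format and how Tesla keys road segments are not public.

```python
from typing import Optional

# Toy stand-in for per-segment map data. This only illustrates the idea of a
# stable but potentially stale reference; none of these values are real.
MAP_SPEED_LIMITS_KPH = {
    "segment-1042": 80,  # city recently lowered the posted limit, but the map still says 80
    "segment-1043": 50,
}

def map_limit_for(segment_id: str) -> Optional[int]:
    """Return the stored limit for a road segment, or None if it isn't mapped."""
    return MAP_SPEED_LIMITS_KPH.get(segment_id)

print(map_limit_for("segment-1042"))  # -> 80, even if the posted signs have changed
```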

How They Work Together

The system’s strength lies in how it fuses vision and map data. The vehicle continually runs sanity checks between what the cameras see and what the map indicates.

If the vision system’s confidence is low — for example, when it can’t obtain a clear read on a sign or there’s a direct conflict — the car will often default to the map data, which is typically the safer choice. Conversely, when the vision system has high confidence that it has detected a sign, particularly temporary signage like in construction zones, it will override the map data so the vehicle can adapt to the current conditions.
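One way to picture that fusion step is as a simple arbitration rule: trust the camera when its confidence is high, especially for temporary signage, and fall back to the map otherwise. The sketch below assumes exactly that, with an invented confidence threshold and function name; Tesla's actual logic is undoubtedly more involved and has not been disclosed.

```python
from typing import Optional

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for trusting a camera reading; not a published Tesla value

def resolve_speed_limit(vision_limit: Optional[int],
                        vision_confidence: float,
                        is_temporary_sign: bool,
                        map_limit: int) -> int:
    """Pick the limit to display by combining a camera reading with map data."""
    if vision_limit is None:
        # No sign was read at all: fall back to the map.
        return map_limit
    if is_temporary_sign and vision_confidence >= CONFIDENCE_THRESHOLD:
        # High-confidence temporary signage (e.g. a construction zone)
        # overrides the map so the car can adapt to current conditions.
        return vision_limit
    if vision_confidence < CONFIDENCE_THRESHOLD:
        # Low confidence or a murky read: default to the map, which is
        # typically the safer choice.
        return map_limit
    return vision_limit

# A clearly read 60 km/h construction-zone sign beats a map that says 80 km/h.
print(resolve_speed_limit(60, 0.97, True, 80))   # -> 60
# A glare-obscured, low-confidence read falls back to the map.
print(resolve_speed_limit(40, 0.35, False, 80))  # -> 80
```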

Occasionally the interaction of these systems can produce confusing behavior: if the map contains an incorrect limit for a GPS segment, the map might temporarily override the vision system and cause the vehicle to try to slow down or speed up unexpectedly.

A Real-World Example

An illustrative case is a local road where FSD frequently reads a 40 km/h minimum-speed sign as the applicable limit, even though the actual speed limit is 80 km/h. Instead of immediately dropping to the lower speed, FSD may slow slightly to match traffic flow, continue at close to its previous speed, and then switch back to 80 km/h on the display once the systems reconcile.

Determining the correct speed limit is a complex balancing act that will improve with continued iterations, but the current approach — combining real-time vision with persistent map data — works effectively in most situations aside from occasional misreads on low-traffic routes. You can read more about pre-mapped data and how it affects what your vehicle sees in a previous article.