Steering wheel with automated driving controls

A highlight of NVIDIA’s Alpamayo autonomous driving platform—set to launch on the 2026 Mercedes‑Benz CLA—is a capability the company calls Cooperative Steering.

Unlike Tesla’s Autopilot or Full Self-Driving (FSD), which generally operate as on/off systems, NVIDIA’s approach allows a middle ground. If the driver momentarily turns the wheel to avoid a pothole or shift over for a cyclist, the system doesn’t immediately switch off.

Instead, it works alongside the driver, accepting steering input while maintaining lateral control, and then smoothly takes over again once the driver releases the wheel.

For drivers accustomed to Tesla’s torque‑based disengagement—where resisting the wheel quickly hands control back to the human—this is a notable difference. Although you can gently nudge Autopilot or FSD without a full disengagement today, the latitude to do so is far more limited than what NVIDIA describes.
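To make the contrast concrete, here is a minimal sketch of how a cooperative, torque-blending arbiter might differ from a simple disengagement threshold. NVIDIA hasn’t published Cooperative Steering’s internals, so every name, threshold, and ramp rate below is an illustrative assumption, not a description of either company’s actual logic.

```python
from dataclasses import dataclass

# Illustrative sketch only: all constants and names here are hypothetical
# placeholders, not NVIDIA's or Tesla's actual implementation.

DRIVER_TORQUE_ACTIVE_NM = 1.5   # assumed torque above which the driver counts as steering
AUTHORITY_RAMP_PER_TICK = 0.05  # assumed rate at which control returns to the system

@dataclass
class SteeringState:
    system_authority: float = 1.0  # 1.0 = fully automated, 0.0 = fully manual

def steering_command(state: SteeringState,
                     planner_angle_deg: float,
                     driver_angle_deg: float,
                     driver_torque_nm: float) -> float:
    """Blend the planner's steering target with the driver's input each control tick."""
    if abs(driver_torque_nm) > DRIVER_TORQUE_ACTIVE_NM:
        # A torque-threshold system would disengage here and latch off.
        # A cooperative system instead yields authority to the driver while
        # keeping lateral control running in the background.
        state.system_authority = max(0.0, state.system_authority - 5 * AUTHORITY_RAMP_PER_TICK)
    else:
        # Wheel released: ramp authority back up instead of requiring the
        # driver to re-engage the system.
        state.system_authority = min(1.0, state.system_authority + AUTHORITY_RAMP_PER_TICK)

    a = state.system_authority
    return a * planner_angle_deg + (1.0 - a) * driver_angle_deg
```

The key difference is the second branch: rather than latching into a disengaged state, the system’s authority ramps back up as soon as driver torque falls away.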

Should Tesla incorporate this behavior, or is it a bridge technology on the way to full autonomy? See NVIDIA’s demonstration below:

The case for cooperative steering

Drivers sometimes experience “intervention anxiety.” When the vehicle hesitates or positions itself awkwardly, they have to choose: let FSD resolve the situation and risk a curb or other mistake, or take over and disengage the system. A cooperative mode reduces this friction—nudging the car is enough, and automated driving can continue afterward.

It can also help when FSD isn’t taking the exact route you prefer. You could execute a turn yourself and then simply release the wheel, allowing the system to carry on.

Research on automated driving safety suggests the most hazardous moment in semi‑autonomous systems is the handoff. When FSD disengages, steering effort can drop from assisted torque to none instantly; if the driver isn’t prepared for the sudden change in steering effort, the result can be a swerve. Cooperative steering diminishes the need for abrupt corrections and helps keep the safety net in place. See, for example, this discussion of emerging risks with Level 3 systems.
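As a toy illustration of why an abrupt cut is risky, the numbers below (all invented) compare the net torque at the steering rack when assist disappears in a single step versus being ramped out over a few control ticks.

```python
# Toy illustration of the handoff problem: when assist torque is cut
# instantly, the driver's grip is suddenly the only input, and any mismatch
# shows up as a steering jerk. All values are made up for demonstration.

def wheel_torque(assist_nm: float, driver_nm: float) -> float:
    """Net torque the steering rack actually sees."""
    return assist_nm + driver_nm

ASSIST_NM = 2.0   # assumed assist torque while the system is engaged
DRIVER_NM = -0.5  # driver lightly resisting, e.g. nudging around a pothole

# Binary disengagement: assist vanishes in one step.
before = wheel_torque(ASSIST_NM, DRIVER_NM)
after = wheel_torque(0.0, DRIVER_NM)
print(f"instant cut: net torque jumps from {before:+.1f} Nm to {after:+.1f} Nm")

# Cooperative exit: assist ramps down over several control ticks, giving the
# driver time to take up the load gradually.
steps = 5
for i in range(steps + 1):
    assist = ASSIST_NM * (1 - i / steps)
    print(f"ramp step {i}: net torque {wheel_torque(assist, DRIVER_NM):+.1f} Nm")
```

With the ramp, steering effort builds gradually instead of the wheel suddenly going light mid‑correction.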

The case against

A straightforward argument against adding this to Tesla is focus: the target isn’t FSD (Supervised) but FSD (Unsupervised). Today’s supervised system is a step toward full autonomy, and time spent building interim features could instead accelerate core FSD improvements.

Tesla’s AI philosophy emphasizes that human interventions are errors and that the end state is autonomy; optimizing the in‑between could be viewed as a distraction. There’s also a risk that cooperative behavior could hide failure modes that FSD should solve outright.

Another safety concern is role clarity: who is actually controlling the car? With a more binary system, it’s obvious when the driver is fully in charge. Blurring that line can foster complacency or misinterpretation—for instance, assuming the vehicle will handle braking at a light while the driver steers around an obstacle.

For robotaxis, cooperative steering is moot—there’s no steering wheel. For today’s consumer vehicles, however, NVIDIA’s approach may offer a smoother experience. The open question is how long this remains relevant: with FSD v14, Tesla appears to be edging closer to true autonomy, and the need to touch the wheel continues to decline.
