‘Autopilot’ Tesla fatality is a reminder to pace and structure autonomous car development

In late May of this year, Tesla announced that some 70,000 of its cars using its “Autopilot” assisted driving feature had driven 100 million miles on the road. This milestone was notable not only because it far exceeded any competitor’s record but also because, as some commentators noted, on average a fatality occurs on US roads every 100 million miles driven.

What has only just been reported, however, is that the first fatality involving a Tesla in Autopilot mode had sadly already occurred on 7 May in Florida, when a Tesla Model S in Autopilot mode collided with a tractor-trailer, killing the Tesla driver. A National Highway Traffic Safety Administration (NHTSA) investigation has now been opened.

Whilst the public response to this incident will undoubtedly be of interest to the Connected and Autonomous Vehicle (CAV) sector, it should be borne in mind that the cause (and the resilience of any future mitigation) remains unclear. The focus of any investigation is likely to fall as much on Tesla’s approach to testing driver assistance features and deploying software as on the wider concept of CAV regulation.

Commentators and industry experts have regularly noted that Tesla’s startlingly successful vehicle programme, led by its CEO, Elon Musk, has threatened to outstrip the pace of regulation in the US. Autopilot-enabled Tesla cars have been operating on an everyday basis on public roads since 2014 without significant regulatory impact.

In October 2015, an ‘over-the-air’ downloaded software patch gave the Autopilot feature to tens of thousands more existing owners of hardware-enabled Teslas in a way which appeared largely to have bypassed the NHTSA. Autopilot is expressly a test feature in “beta” and, because it is a driver assistance tool, Tesla advises that drivers should remain alert at all times and keep their hands on the wheel. Despite this, numerous YouTube videos show drivers openly operating Teslas in Autopilot with little or no manual control, often dangerously and illegally.

This loose regulatory framework has facilitated a step change in the take-up of autonomous features in vehicles, but some will say that the adoption of driver assistance software features proceeded with too much haste and too little caution.

Disruptive innovation should, after all, only be ‘disruptive’ up to a point. It is probable that the regulatory position on existing vehicles with driver assistance features such as Autopilot, and on their testing, will be reviewed with a view to tightening it to ensure public safety.

Beyond mere driver assistance tools like Autopilot, and looking ahead to true semi-autonomous or autonomous vehicles (where driver control is fully handed over to the vehicle system by design), authorities and regulators are also likely to be more proactive in legislating and providing common frameworks, standards and approaches to ensure public safety, both in terms of hardware (eg sensors) and software (eg cyber-security).

In the UK, the Government’s policy paper “The Pathway to Driverless Cars” has provided a clear, structured framework and approach towards the testing and eventual introduction of autonomous vehicles since February 2015. Proactive policy development is being supported by a number of publicly-funded pilot projects in the UK, including in Greater Bristol, Greenwich, Milton Keynes and Coventry. The ultimate goal is to introduce autonomous vehicles with robust legal frameworks, insurance arrangements and safety standards already in place.

Burges Salmon is an adviser to the VENTURER and FLOURISH driverless vehicle projects funded by Innovate UK. VENTURER and FLOURISH partners include Axa UK plc, Atkins, Bristol City Council and South Gloucestershire Council.

———————————

Brian Wong – Legal Director, Burges Salmon