Until now, the safety and reliability of self-driving cars had been a largely theoretical topic. The fatal accident of a Tesla Model S owner in the US changed this fundamentally. The key question now occupying companies and private individuals alike with regard to autonomous driving is: how safe is the technology really? Beyond that, the fatal crash also raises the question of whether humanity is ready to use (semi-)autonomous vehicles and autopilot software responsibly.
First fatal accident
The accident took place on May 7, 2016 in Williston, Florida: a Tesla Model S crashed into a truck trailer, and the impact almost completely sheared off the roof of the Model S. It is the first fatal accident involving a vehicle that uses autonomous driving technology based on software, sensors, and cameras. The driver of the car was identified by the Florida Highway Patrol as 40-year-old American Joshua Brown. The exact circumstances of the accident are still unclear; the National Highway Traffic Safety Administration (NHTSA) has opened an investigation into what happened. Tesla Motors has also commented on the accident: in a blog post, the company stated that the Autopilot's sensors had failed to recognize the white-painted truck trailer against the bright sky.
As recently announced, the NHTSA is also investigating a second accident involving a (semi-)autonomous Tesla model: on July 1, a Model X SUV crashed in Pennsylvania. The owner stated that after he engaged the Autopilot, the vehicle steered into the guardrail before finally flipping over. Tesla commented on the incident with the following statement: "Based on the information currently available to us, there is no reason to believe that the Autopilot had anything to do with this accident."
“Full automation is not the logical culmination”
For David Mindell, author of “Our Robots, Ourselves: Robotics and the Myths of Autonomy,” the idea of fully autonomous driving is generally a rather poor one. Mindell, who is also a professor in the renowned MIT Department of Aeronautics and Astronautics, supports his thesis by pointing to the Apollo space program, under which six moon landings were carried out. These were initially planned to be completely autonomous, with the astronauts as merely passive passengers. After various tests and studies, however, the astronauts ended up having to control many critical functions manually, such as the landing procedure.
Citing an MIT concept paper, Mindell said in an interview with the MIT Technology Review that the level of automation in any project can be classified on a scale of 1 to 10, but that a higher automation level does not necessarily yield better results. “The Apollo computers allowed the crew to fly a spacecraft that, although it had a lower degree of automation, came pretty close to perfection. The sophistication of computers and software was not used to replace humans, but to put better and more comprehensive control options in their hands.”
As another example, Mindell cites the aviation industry. Although airlines have taken advantage of automation, in the form of the autopilot and landing assistants, a professionally trained pilot is still necessary to manage the aircraft's systems and to make the right decisions quickly in critical situations. It is therefore reasonable to hope, according to Mindell, that vehicles with autonomous driving functions will relieve the driver of various tasks in the future, so that he or she can concentrate better on driving. Full automation, however, is not the logical culmination of vehicle development.