Chris Gerdes (Stanford) on Technology, Policy and Vehicle Safety - MIT Self-Driving Cars

Lex Fridman
Science & Technology · 4 min read · 61 min video
Dec 6, 2017
TL;DR

Chris Gerdes discusses automated vehicles, balancing technology, safety, and policy, emphasizing voluntary guidance for innovation.

Key Insights

1. Current vehicle safety standards are slow to adapt to rapid technological advances such as AI in autonomous vehicles.

2. The Federal Automated Vehicle Policy offers voluntary guidance, including a 15-point safety assessment, for AV development and testing.

3. Operational Design Domain (ODD) and minimal-risk fallback conditions are crucial for defining where and how AVs should operate safely.

4. Validation methods for AVs can include test tracks, real-world driving miles, and simulation, with no single approach mandated.

5. Ethical considerations in AVs are approached through engineering principles focused on risk reduction and societal well-being, not just 'trolley car problems'.

6. Data sharing, particularly for edge cases, is vital for accelerating AV safety improvements, with aviation's ASIAS system as a model.

THE EVOLVING LANDSCAPE OF VEHICLE SAFETY STANDARDS

Traditional vehicle safety standards, established through processes like the National Traffic and Motor Vehicle Safety Act of 1966, are based on minimum performance requirements with objective tests. However, these rulemaking processes are very time-consuming, often taking seven years or more. This slowness poses a significant challenge for rapidly evolving technologies like deep learning and AI in autonomous vehicles (AVs), where solutions developed today could be outdated by the time regulations are finalized. The current system relies on manufacturer self-certification, which may not be agile enough for the pace of innovation in AVs, prompting a need for new approaches.

THE FEDERAL AUTOMATED VEHICLE POLICY FRAMEWORK

Recognizing the limitations of traditional standards, the U.S. Department of Transportation introduced the Federal Automated Vehicle Policy, offering voluntary guidance rather than strict regulations. This policy encourages manufacturers to voluntarily follow specific guidance and submit safety assessments. It's designed to foster innovation by allowing companies to define their own safety approaches, with the expectation that best practices will emerge over time. This proactive, guidance-based framework aims to balance public safety with the need for accelerated testing and development of AVs, which is crucial for gathering real-world data.

OPERATIONAL DESIGN DOMAIN AND MINIMAL RISK CONDITIONS

A key component of the federal guidance is the concept of the Operational Design Domain (ODD), which requires manufacturers to clearly define the specific conditions under which their AV systems are intended to operate. This includes factors like geographical area, road types, weather conditions, and time of day. Alongside the ODD, developers must define minimal risk or fallback conditions, outlining what the system will do if it encounters a situation outside its ODD or if a system failure occurs. This approach allows for diverse AV designs, from low-speed shuttles to highway-capable vehicles, ensuring they operate within defined safe parameters.
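The ODD-plus-fallback idea can be made concrete with a small sketch. The field names, conditions, and the low-speed shuttle example below are illustrative assumptions, not taken from the federal guidance itself; the point is only that an ODD is a machine-checkable envelope, and leaving it triggers a minimal-risk fallback.

```python
from dataclasses import dataclass

# Hypothetical ODD specification: these fields are illustrative, not a standard.
@dataclass
class OperationalDesignDomain:
    max_speed_mph: float
    allowed_road_types: set
    allowed_weather: set
    daylight_only: bool

def within_odd(odd, road_type, weather, speed_mph, is_daylight):
    """Return True if current conditions fall inside the declared ODD."""
    return (
        speed_mph <= odd.max_speed_mph
        and road_type in odd.allowed_road_types
        and weather in odd.allowed_weather
        and (is_daylight or not odd.daylight_only)
    )

def next_action(odd, **conditions):
    """Operate autonomously inside the ODD; otherwise fall back to a
    minimal-risk condition (e.g., pull over and stop)."""
    return "drive" if within_odd(odd, **conditions) else "minimal_risk_fallback"

# Example: a daylight-only, low-speed campus shuttle.
shuttle_odd = OperationalDesignDomain(
    max_speed_mph=25,
    allowed_road_types={"campus", "residential"},
    allowed_weather={"clear", "overcast"},
    daylight_only=True,
)

print(next_action(shuttle_odd, road_type="campus", weather="clear",
                  speed_mph=20, is_daylight=True))   # drive
print(next_action(shuttle_odd, road_type="highway", weather="clear",
                  speed_mph=20, is_daylight=True))   # minimal_risk_fallback
```

Different AV designs simply declare different envelopes; the highway-capable vehicle and the low-speed shuttle share the same structure but different parameters.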

VALIDATION METHODS AND ETHICAL CONSIDERATIONS

The guidance acknowledges various methods for validating AV safety, including test tracks, real-world driving with extensive mileage, and simulation. Each method has limitations; test tracks lack real-world unpredictability, real-world testing may not cover rare edge cases, and simulations must accurately reflect real-world complexities. Crucially, the policy addresses ethical considerations, moving beyond abstract 'trolley car problems' to practical engineering challenges. Manufacturers are prompted to consider how their AVs interact with pedestrians and other road users, using risk reduction principles similar to how automatic emergency braking systems already differentiate between obstacles like vehicles and humans.
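The risk-reduction framing can be sketched as a simple weighted decision. The harm weights, threshold, and time-to-collision figures below are invented for illustration and do not describe any production AEB system; they only show how an engineering approach can differentiate obstacle classes without solving an abstract trolley problem.

```python
# Illustrative only: class weights and the threshold are assumptions for this
# sketch, not values from any real automatic emergency braking system.
HARM_WEIGHT = {"pedestrian": 10.0, "cyclist": 8.0, "vehicle": 3.0, "debris": 1.0}

def collision_risk(obstacle_class, time_to_collision_s):
    """Simple risk score: greater potential harm and shorter time to
    collision both raise the score."""
    return HARM_WEIGHT[obstacle_class] / max(time_to_collision_s, 0.1)

def should_brake(obstacle_class, time_to_collision_s, threshold=4.0):
    """Trigger emergency braking when the risk score crosses a threshold."""
    return collision_risk(obstacle_class, time_to_collision_s) >= threshold

# A pedestrian two seconds away triggers braking; roadway debris at the same
# distance does not, because the potential harm is far lower.
print(should_brake("pedestrian", 2.0))  # True
print(should_brake("debris", 2.0))      # False
```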

THE ROLE OF LEARNING VS. PROGRAMMED RULES

A central debate lies in whether AVs should be programmed with fixed rules or learn from data, mirroring human driving behavior. While human error causes a vast majority of accidents, simply replicating human driving might not achieve the full safety potential of automation. Conversely, purely rule-based systems struggle with the infinite variety of real-world scenarios. The challenge lies in finding a balance, leveraging learning algorithms to adapt to unforeseen situations while ensuring robust safety, and potentially exceeding human capabilities in areas like precision and reaction time, as demonstrated by advanced research vehicles.

ACCELERATING SAFETY THROUGH DATA SHARING AND COLLABORATION

The transcript highlights the immense potential of data sharing, especially for rare, critical 'edge case' scenarios, to accelerate AV safety. By creating shared databases of such events, developers can train AI models more effectively. Inspired by aviation's ASIAS system, where airlines anonymously share safety data, a similar collaborative model could benefit the AV industry. Despite intellectual property and privacy concerns, creating frameworks for sharing anonymized or aggregated data could foster trust, inform regulators, and lead to safer vehicles for everyone, demonstrating a path towards a proactive safety culture and global harmonization of best practices.
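What "sharing anonymized or aggregated data" might look like mechanically can be sketched as follows. The record fields, scenario labels, and hashing scheme are hypothetical assumptions for illustration; they are not drawn from ASIAS or any existing AV data-sharing framework.

```python
import hashlib
import json

def anonymize_edge_case(record, salt="fleet-shared-salt"):
    """Keep the safety-relevant scenario fields and replace the vehicle ID
    with a salted hash, so repeat events from one vehicle can be linked
    without identifying it. Precise GPS and timestamps are dropped."""
    return {
        "scenario": record["scenario"],    # e.g. a labeled edge-case type
        "weather": record["weather"],
        "road_type": record["road_type"],
        "outcome": record["outcome"],      # e.g. "safe_stop", "disengagement"
        "vehicle_token": hashlib.sha256(
            (salt + record["vehicle_id"]).encode()).hexdigest()[:16],
    }

# Hypothetical raw event as one developer might log it internally.
raw = {
    "vehicle_id": "VIN-1234567",
    "gps": (37.4275, -122.1697),
    "timestamp": "2017-11-02T14:31:07Z",
    "scenario": "pedestrian_emerges_occluded",
    "weather": "rain",
    "road_type": "urban_arterial",
    "outcome": "safe_stop",
}

print(json.dumps(anonymize_edge_case(raw), indent=2))
```

Aggregating such stripped-down records across companies would let everyone train and test against rare scenarios no single fleet encounters often, while the omitted identifiers address the privacy and intellectual-property concerns noted above.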

Common Questions

How does the US currently regulate vehicle safety?

The US relies on a system of federal motor vehicle safety standards, which are minimum performance requirements. Manufacturers self-certify that their vehicles meet these standards before they are sold, unlike pre-market certification systems in other parts of the world.
