2021-04-06

Building Responsibility in AI: Transparent AI for Highly Automated Vehicle Systems 2021-01-0195

Replacing a human driver is an extraordinarily complex task. While machine learning (ML) and its subset, deep learning (DL), are fueling breakthroughs in everything from consumer mobile applications to image and gesture recognition, significant challenges remain. The majority of artificial intelligence (AI) learning applications, particularly with respect to Highly Automated Vehicles (HAVs) and their ecosystem, have remained opaque: genuine “black boxes.” Data is loaded into one side of the ML system and results come out the other, yet there is little to no understanding of how the decision was arrived at.
To be accurate, these AI systems require enormous amounts of data to crunch, and the sheer computational complexity of building DL-based AI models also slows progress in accuracy and the practicality of deploying DL at scale. In addition, training times and forensic decision investigation, often measured in days and sometimes weeks or months, slow down implementation and make traditional agile approaches, with their definition of done, almost impossible to follow.
Recent breakthroughs have allowed ML systems in an HAV implementation context to determine reasonable solutions in very fixed scenarios. However, these systems are typically very complex and largely incapable of explaining how or why they arrived at a given solution. Without this knowledge and reasoning, intervention and proof of compliance during HAV development, validation, verification, and production applications are nearly impossible. To cut the development and forensic time it takes to create and understand high-precision DL models, decisions must be understood and reasoning applied.
While significant breakthroughs have been made in Explainable AI (XAI) through DL technologies such as recursive methods, and in Cognitive AI (CAI) through user interfaces (UIs), they all commonly fail at “transparency”: the ability to access the logic behind a decision made by an ML system. Transparency is a requirement for establishing trust in high-risk, high-human-cost applications such as an HAV. This paper will outline how a solution based on Knowledge Representation and Reasoning (KRR) creates a “holistic AI” approach that provides both knowledge of how an HAV machine learning system arrives at decisions and the rationale, or reasoning, behind them, offering new insight into what would typically be a blind process. This “Transparent AI” solution will be explored through an algorithmic approach and then demonstrated through a software implementation within Baidu’s Apollo model framework.
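To illustrate the notion of decision-level transparency, the sketch below shows one possible way a rule-based KRR layer could wrap the output of a black-box perception model and return a human-readable reasoning trace alongside its decision. This is a minimal, hypothetical example: the class and rule names (PerceptionOutput, KnowledgeBase, explain_decision) and the thresholds are illustrative assumptions only and do not reflect the paper's actual algorithm or its Apollo-based implementation.

```python
# Illustrative sketch only. All names and thresholds here are hypothetical;
# the intent is simply to show how a reasoning layer can attach an
# inspectable rationale to an otherwise opaque ML decision.

from dataclasses import dataclass


@dataclass
class PerceptionOutput:
    """Hypothetical output of a black-box ML perception module."""
    object_class: str      # e.g. "pedestrian"
    confidence: float      # classifier confidence in [0, 1]
    distance_m: float      # estimated distance to the object in meters
    ego_speed_mps: float   # current vehicle speed in meters per second


class KnowledgeBase:
    """Toy rule base standing in for a KRR component."""

    def __init__(self):
        # (condition, conclusion) pairs; each fired rule becomes one
        # step in the reasoning trace that accompanies the decision.
        self.rules = [
            (lambda p: p.object_class == "pedestrian" and p.confidence > 0.8,
             "Object classified as a pedestrian with high confidence"),
            (lambda p: p.distance_m / max(p.ego_speed_mps, 0.1) < 2.0,
             "Time-to-contact is below the 2-second safety threshold"),
        ]

    def explain_decision(self, p: PerceptionOutput):
        """Return (decision, reasoning trace) instead of a bare label."""
        trace = [conclusion for condition, conclusion in self.rules if condition(p)]
        decision = "BRAKE" if len(trace) == len(self.rules) else "MONITOR"
        return decision, trace


if __name__ == "__main__":
    kb = KnowledgeBase()
    observation = PerceptionOutput("pedestrian", 0.93, 12.0, 8.0)
    decision, trace = kb.explain_decision(observation)
    print(decision)       # BRAKE
    for step in trace:    # the transparent, inspectable rationale
        print(" -", step)
```

In a real HAV stack the rule base would be far richer, but the design point is the same: every decision is returned together with the chain of fired rules that justifies it, making the process inspectable rather than blind.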
