How to Improve the Performance & Safety of Level 2 Automation: Tesla Mountain View Examination

The National Transportation Safety Board’s widely anticipated preliminary report on the crash of a 2017 Tesla Model X on March 23 in California is out.

The National Transportation Safety Board (NTSB) issued last Thursday its preliminary report for the investigation of the fatal, March 23, 2018, crash of a Tesla on U.S. Highway 101 in Mountain View, California.

Information contained in the report is preliminary and subject to change during the NTSB’s ongoing investigation.  Preliminary reports, by their nature, do not contain analysis and do not discuss probable cause and as such, no conclusions about the cause of the crash should be drawn from the preliminary report.

According to performance data downloaded from the crash vehicle, a 2017 Tesla Model X P100D, the driver was using traffic-aware cruise control and autosteer lane-keeping assistance, which are advanced driver assistance features that Tesla refers to as Autopilot. The vehicle was approaching the state Highway 85 interchange, traveling south on U.S. Highway 101, in the second lane from the left — a high-occupancy-vehicle lane.

As the vehicle approached the paved “gore area” dividing the main travel lane of U.S. Highway 101 from the state Highway 85 exit ramp, it moved to the left and entered the gore area at approximately 71 mph, striking a previously damaged SCI Smart Cushion crash attenuator. The speed limit for the roadway is 65 mph. The vehicle’s traffic-aware cruise control was set to 75 mph at the time of the crash.

The Tesla was subsequently involved in collisions with a 2010 Mazda 3 and a 2017 Audi A4. The Tesla’s 400-volt, lithium-ion, high-voltage battery was breached during the crash and a post-crash fire ensued. The Tesla’s driver was found belted in his seat and bystanders removed him from the vehicle before it was engulfed in flames. The Tesla driver suffered fatal injuries while the driver of the Mazda suffered minor injuries and the driver of the Audi was not injured.


What Went Wrong

Among the questions raised: Was Tesla’s driver monitoring system adequate to ensure the driver’s proper use of the Autopilot system? Why did the vehicle issue no warning while driving straight into a concrete median barrier? Did the car’s forward radar detect anything amiss as it careened toward the barrier?

According to VSI analysis, the accident was caused in part by the liberal grace period Tesla allows before warning the driver and eventually disengaging. Based on our own research vehicle, you get about two minutes from the time Autopilot is engaged until the system starts prompting you with warnings to grab the wheel.

If you do not grab the wheel, the alerts get more pronounced until the system eventually disengages, and you are presented with a message that says you may no longer use Autopilot for the duration of the trip.
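
To make this escalation logic concrete, here is a minimal Python sketch of how such a hands-off warning policy might be modeled. The two-minute grace period reflects our observation in the research vehicle; the later thresholds, the stage names, and the function itself are illustrative assumptions, not Tesla's implementation.

```python
# Illustrative model of an escalating hands-on-wheel warning policy.
# GRACE_PERIOD_S reflects VSI's observed ~2 minutes; the other
# thresholds are assumed values for the sketch.

GRACE_PERIOD_S = 120.0   # observed: ~2 min before the first prompt
VISUAL_WARN_S = 10.0     # assumed: visual prompt phase duration
AUDIBLE_WARN_S = 10.0    # assumed: audible alert phase before lockout

def warning_stage(seconds_hands_off: float) -> str:
    """Map continuous hands-off time to a warning stage."""
    if seconds_hands_off < GRACE_PERIOD_S:
        return "none"
    if seconds_hands_off < GRACE_PERIOD_S + VISUAL_WARN_S:
        return "visual"      # "hold steering wheel" prompt
    if seconds_hands_off < GRACE_PERIOD_S + VISUAL_WARN_S + AUDIBLE_WARN_S:
        return "audible"     # chimes, more pronounced alerts
    return "disengage"       # Autopilot locked out for the trip
```

Shortening a constant like GRACE_PERIOD_S is exactly the kind of change that could be pushed over the air, which is why we consider it low-hanging fruit.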

As a driver monitoring solution (DMS), Tesla uses a torque sensor in the steering wheel to measure driver engagement. The system forces engagement because you must apply a little resistance for the vehicle to confirm you are holding the steering wheel. This is actually a pretty good system in our opinion because it requires a level of engagement. So unlike a traditional DMS, you are not monitoring the driver’s attentiveness but rather measuring engagement. For a Level 2 system, engagement is the more important item. Cadillac’s Super Cruise measures both engagement and attentiveness, as it monitors the driver’s forward-looking pose.

A Vision First Solution

Tesla Autopilot is a “vision first” solution, as it requires camera visibility of lane lines before it can be enabled. If there are no lines, the system simply will not work. Vision-only systems work reasonably well under most use cases such as interstates or divided highways. They are also adequate on a two-lane road without signaled intersections and stop signs, as the Tesla has no way to identify those.

Radar is another part of the solution but is secondary to the vision system. Radar by itself cannot steer a car, but it can prevent you from hitting something. Radar does a very good job with car following (a.k.a. adaptive cruise control). But radar is not perfect. Radar does a poor job on static objects. It must filter out most of them, because if it did not, there would be too many false positives. This creates hazards. VSI has experienced false positives from time to time in its Tesla, but usually nothing more than a rapid slowdown when the radar misinterprets another vehicle as being in its trajectory.
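
The static-object problem can be illustrated with a simple sketch. A radar return whose range rate cancels the ego vehicle's speed is stationary in the world frame (signs, bridges, barriers), and such returns are commonly suppressed to avoid false braking. The threshold and data layout below are our own illustrative assumptions, not any production tuning.

```python
# Sketch of Doppler-based static-return filtering: the very filtering
# that can also hide a stationary obstacle, like a crash attenuator,
# sitting directly in the vehicle's path.

STATIC_TOLERANCE_MPS = 0.5  # assumed tolerance for "stationary"

def is_static_return(range_rate_mps: float, ego_speed_mps: float) -> bool:
    """A closing rate equal and opposite to ego speed implies a
    stationary object in the world frame."""
    return abs(range_rate_mps + ego_speed_mps) < STATIC_TOLERANCE_MPS

def filter_tracks(tracks: list, ego_speed_mps: float) -> list:
    """Drop stationary returns to reduce false positives."""
    return [t for t in tracks
            if not is_static_return(t["range_rate"], ego_speed_mps)]
```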

While camera plus radar does a reasonable job of maintaining Level 2 performance, it is not perfect. If the driver is still in the loop, then this is a satisfactory approach. This is why driver monitoring is so important for any form of automation.

Another factor that comes into play in a vision-first solution is a map, or at least a lane model. Without a lane model you have no ground truth to go by. You don’t necessarily need super accurate precision, but you need to know what a proper trajectory is. The camera-based lane-keeping algorithms really don’t know the difference between right and wrong.

For example, when lines are out of the ordinary the system can easily get confused. In the Mountain View case, the vehicle misinterpreted the lines and got caught between the two lanes, where it hit the barrier.

As we have illustrated, you had one lane that split in two. The Tesla got confused and thought the area between the two lanes was in fact another lane. I suspect the lane markings were worn or not properly applied.


In the Mountain View accident, a likely contributing factor is the change in road surfaces. The dark surface is asphalt while the light surface is concrete. Autopilot may have misinterpreted the change of surfaces as a lane line, leading to the improper trajectory. If Autopilot had a lane model and were localizing against it, this type of accident could be prevented. This is why Autopilot (and any other L2 system) requires constant driver attention and engagement.
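
A hedged sketch of what that localization check might look like: compare the camera's perceived lane centerline against a stored lane model and reject hypotheses that diverge too far, for instance a pavement seam mistaken for a lane line. The tolerance value and point format are illustrative assumptions.

```python
import math

MAX_LATERAL_ERROR_M = 0.75  # assumed divergence tolerance

def max_divergence(perceived_pts, map_pts):
    """Max gap (meters) between matched (x, y) centerline points from
    the camera and the lane model at the vehicle's localized pose."""
    return max(math.dist(p, m) for p, m in zip(perceived_pts, map_pts))

def lane_plausible(perceived_pts, map_pts) -> bool:
    """Reject camera lane hypotheses that contradict the lane model,
    rather than steering into a phantom lane."""
    return max_divergence(perceived_pts, map_pts) <= MAX_LATERAL_ERROR_M
```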

The NTSB described this as a “gore area”: a triangular-shaped boundary created by white lines marking an area of pavement formed by the convergence or divergence of a mainline travel lane and an exit/entrance lane.

VSI remains optimistic, noting that these accidents are addressable. If we were Tesla, the first thing we would do is take out the 2+ minute grace period. The second suggestion is a map-based localization method. There are several ways you could go about this, and VSI has written extensively about localization methods.

Both recommendations could be enabled with software updates, and we would expect this to happen within a short amount of time.


The gradual rollout of active safety and assisted driving features is going to drive auto sales for the next ten-plus years. Just as ADAS systems have infiltrated the feature lists of most modern cars, so too will L2 automation. And all these vehicles will require driver monitoring that makes sure the driver is attentive and engaged in the driving task.

Some might ask: what is the point of an automated vehicle if you must maintain attention and engagement? Well, unless you have experienced L2 automation on a decent stretch of highway, you may never understand the benefits from a comfort and safety standpoint.

The Emerging AV Eco-System -- Who Will Drive Automated Mobility on Demand?

Hypothetically, if cars were being invented today they would have no human behind the wheel! Take the driver out of the loop, experts claim, and you will solve 90% of traffic fatalities. A fully automated city is quieter, cleaner, safer and more efficient! What’s not to like?

But solving this problem is not easy. Automation requires the most advanced deployment of embedded systems, backed by a network of services performing a host of functions necessary to support fleets of vehicles. The end game is to maintain the performance and safety of the vehicle grid, coupled with the infrastructure and services necessary to support automated mobility on demand.

Automation is worth trillions when you look at the big picture. Literally every company in technology, transportation, telecommunications, logistics, data centers, IT and commerce is looking to capture a piece of this enormous opportunity.

While everyone is familiar with the advances being made by the big guys like Waymo (Google), Uber, and GM, there are literally hundreds of other companies creating solutions and IP to serve automated vehicle technologies.

Many of these solutions come from start-ups, some spun out of university projects related to artificial intelligence, localization or data sciences.  Fueled by an endless supply of investment capital and venture funds, the eco-system for automated driving is on fire. But the movement to automated mobility extends well beyond the vehicles.  

Automated mobility brings with it massive technology innovation and changing consumer expectations that will spill over to many sectors beyond traditional automotive: insurance, finance, infrastructure, energy and technology players will all need to alter their business models. Meanwhile, governments and regulators will face significant decisions around how they incent, standardize, regulate and secure the emerging mobility environment.

The Extended AV Eco-System 

The composition of companies competing in the AV eco-system is very diverse, as pointed out in the following chart, which is divided into four quadrants. The lower left is where the vehicles and associated robot technologies reside.

On the upper left side of the chart are the data center assets to support the entire pipeline of data coming into and out of the vehicle. From server farms to telecom this area handles the data flow. 

The upper right side of the chart represents the services necessary to support automation and includes mobility services, commerce, localization services, app services, and more. The lower right side represents functions such as fleet management, supervision, monitoring, and public transport.


The Disruption Begins

Within the passenger car segment, the development of automated vehicles is following two different trajectories. Incremental automation is emerging now, with many vehicles offering Level 2 automation built on top of safety systems. The consumer passenger car segment will continue down this path for years to come before highly automated vehicles (L4+) start putting a dent in traditional ownership models.


Automated Mobility on Demand

Automated Mobility on Demand (AMoD) is a new acronym for robo-taxis that places AVs within the context of Transportation as a Service (TaaS), Mobility as a Service (MaaS) or Mobility on Demand (MoD). AMoD specifically pertains to robo-taxis operating as a service and run by Transportation Network Companies (TNCs).


When traditional ownership begins to transition to AMoD, fleet operators will be the new owners of vehicle assets. Under this model, traditional auto will serve fleet operators, who eventually become the primary customers for the vehicles. Fleet operators become the new buyers of thousands of vehicles and will have the leverage to spec out vehicles that best meet their requirements.

Traditional auto suppliers, including the tier ones, are vying for position as well. Their know-how in systems and integration bodes well for automated vehicles. Advances in mechatronics, redundant systems, and fail-safe designs will drive their business going forward. Vehicles deployed for automated mobility on demand will be a different kind of car: strong, sturdy, and redundant, built to run 20+ hours a day.

Automated Mobility on Demand requires massive processing resources distributed throughout the vehicle. This is understood, but the processing architectures necessary to handle AI are an order of magnitude greater than traditional approaches. Here, the big processor names are highly committed to developing the best AI architectures, but dozens of others are vying to create more powerful and more efficient AI accelerators.

Better Sensors… More Data

Creating a perfect environmental model is vital for proper automation, but no sensor is perfect. Lidar is the best at measuring the world in 3D, but it is expensive and not all-weather. Radar is cheaper than lidar and getting better at detecting non-metallic actors, but it still suffers from noise. Cameras are best at detection but suffer at judging distances and movement. Furthermore, camera performance declines in inclement weather.

Building the best sensor package will largely depend on the application. Level 4 applications for commercial fleet operators will require a full suite of sensors as well as localization assets and precision mapping. The entire stack of hardware and software components for Level 4 vehicles will be worth substantially more than that of typical consumer vehicles with automated features.

More sensors mean more data and more real-time processing, and this is taxing to the AV stack. Processing raw sensor data is better suited to AI applications. On the other hand, processing at the edge (or in the sensor) may be more efficient for the AV stack, since objects are classified and labeled before they come into the fusion pipeline.

The Eco-System Outside the Car

Getting to new mobility is going to take much more than outfitting vehicles with dozens of sensors and loads of intelligence. Making cars self-aware to the point of operating autonomously has largely been solved. But in the context of mobility services, there remain many gaps between the automated driving systems themselves and the digital infrastructure necessary to support fleets of automated vehicles.

Data center assets are going to be vital to support the eco-system outside of the vehicles. Therefore, large web-based storage and compute services are going to be very important to support the services and functions associated with Automated Mobility on Demand.  

There are also a host of other constituents that will play a role in mobility services. From fleet management and maintenance to the support of services and transactions, there are dozens of players that will be necessary to support both public and private interests.

To support this new eco-system outside the car, there needs to be an orchestration layer to accommodate the data and services that serve the fleets. Furthermore, this orchestration layer will require an open interface specification that facilitates outside services and players. Otherwise, the industry will remain vertically integrated, and we don’t believe that model will prevail long term.

Safety & Control  

Safety and control are another big challenge for those building autonomous vehicles, as you need a fallback in case things go wrong. The underpinning of an autonomous control system is a fault-tolerant real-time operating system. Within this context, some domains will be virtualized to optimize the computing architectures, while others will run in lockstep to reduce the chances of failure.

In the context of robotaxis, there is another level of safety that sits outside the car, and it is vital to Automated Mobility on Demand. Fleets of automated vehicles will require teleoperation and remote monitoring. Teleoperation will be based on monitoring services that optimize flow and traffic, or that provide fallback when conditions in the environment are unpredictable due to storms or natural disasters.

There is also an increasing level of cyber security necessary to support Automated Mobility on Demand. Traditional vehicles don’t share the same threat level as highly automated vehicles, but once fleets of vehicles become abundant, the threat level rises with them. This is especially true with a service-oriented architecture (SOA) where data is coming into and going out of the vehicles regularly. Furthermore, teleoperation becomes a critical threat area, since an outside service provider has the authority to reach deep into the control commands or take over control entirely. The industry is going to require the most sophisticated application of security when highly automated vehicles reach critical mass.

The Control Side of Automation

Developing automated vehicles is a challenging task, to say the least. Fundamentally, you have perception, decision and control. Each of these carries with it a host of challenging tasks, while the end result must be a near-perfect implementation of a safe and comfortable experience. The purpose of this write-up is to address what is necessary on the control side of things to deliver safety and comfort.

  • By-wire control systems are necessary to provide a base platform that can accommodate automated driving. But even if by-wire is available, we still need to come up with a control algorithm that can deliver a safe and comfortable ride.
  • Adjusting and calibrating a PID control algorithm is a time-consuming task because much depends on the physics of the vehicle and on understanding all the calibration measures between the computer output and the control system’s input.

We are going to start with the most basic and fundamental element of automated driving: “control,” using the by-wire signaling necessary for longitudinal and lateral control of the vehicle. But how are we going to transfer our control signals from the computer to the actuators?

Furthermore, for control to work smoothly and safely, you will need a control algorithm, most likely PID or some variation of it. PID theory is all about adjusting the gains necessary to deliver safe and smooth performance.

For automated vehicle functions it is imperative to have by-wire control of throttle, steering and braking. In other words, you need digital signal control and servo actuation that require no human input at all. Off-road applications may retrofit manned vehicles with steering column actuators or pedal actuators, but in the passenger car segment this is not practical for anything beyond very early stage development activities (such as the early DARPA challenges).

Most modern production cars already have by-wire control of engine management. This started more than 15 years ago as an extension of EFI (electronic fuel injection). On the other hand, steer-by-wire technologies are rare on production vehicles. Many vehicles still apply power assistance through a traditional power steering system consisting of a hydraulic steering rack fed by an engine-driven hydraulic power steering pump.

Electric Power Assisted Steering (EPAS) was a key development in the move to steer-by-wire. EPAS eliminates the hydraulic pump and replaces it with an electric motor. In theory, an EPAS system could replace the steering column completely, but steering columns remain a necessary safety element in the event of a failure of the EPAS system. Often, but not always, a car’s EPAS system has enough torque to handle all steering activities in applications such as automatic parking or active lane keeping. It is in these systems where L2+ automated driving applications could be applied.

Meanwhile, brake-by-wire is the more complex by-wire application. A traditional brake system has a direct mechanical link between the brake pedal and the brake master cylinder. Often a vacuum-powered booster is employed to help force the hydraulic fluid from the master cylinder to each brake caliper or wheel cylinder to stop the vehicle. A brake-by-wire system may pressurize the brake system with a hydraulic pump, or do away with the hydraulic system entirely and operate each wheel’s brake caliper electrically.

Most modern production cars don’t have brake-by-wire systems; however, hybrid and plug-in electric vehicles do. This is one reason hybrids or EVs are often the chosen base platform for automated vehicle development. Alternatively, the brake-by-wire actuators found on some hybrid vehicles can be added in-line to a car with a traditional brake system to control brake pressure.

Controlling the Vehicle: Calibration & Signal Management

On the control side of automated vehicles, you have a host of challenges associated with delivering the proper commands for safe and smooth operation.  Fundamental to control is PID Theory. 

A proportional–integral–derivative controller (PID controller or three term controller) is a control loop feedback mechanism (controller) widely used in industrial control systems and a variety of other applications requiring continuously modulated control. A PID controller continuously calculates an error value as the difference between a desired set point and a measured process variable and applies a correction based on proportional, integral, and derivative terms (sometimes denoted P, I, and D respectively) which give their name to the controller type.
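
In standard textbook form, with e(t) the error between the set point and the measured value and Kp, Ki, Kd the three gains, the controller output is:

```latex
u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \frac{d e(t)}{d t}
```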

In practical terms, PID automatically applies accurate and responsive correction to control functions to maintain proper and smooth operation. Without PID you would have a very jerky ride! 

Once the PID is set up, there is a constant multiplier that acts on each of the PID gains. Then, to create smooth and responsive control, we adjust these multipliers; this is called PID tuning. We start by increasing the P gain while leaving the I and D at 0; after that, we carefully increase the I gain and then the D gain. Once tuning is complete we have fast, accurate, smooth, responsive control.
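
Here is a minimal discrete-time PID controller in Python to make the tuning workflow concrete. The structure is the standard three-term form; the gain values in the usage example are placeholders to be tuned per vehicle, not numbers from any real build.

```python
class PID:
    """Minimal discrete-time PID controller."""

    def __init__(self, kp: float, ki: float = 0.0, kd: float = 0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Tuning workflow from above: start with kp only (ki = kd = 0),
# then carefully raise ki, then kd. Values here are illustrative.
speed_controller = PID(kp=0.8, ki=0.1, kd=0.05)
throttle_cmd = speed_controller.update(setpoint=25.0, measured=22.0, dt=0.02)
```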

Here is a really good video on understanding PID control for automated vehicles.

Calibration: a big part of calibration is PID tuning

Another part of calibration is knowing what value the computer uses and matching that with an applied (real-life) value. For example, if our computer sends a steering angle of 300, we need to know what angle this corresponds to in degrees on the actual vehicle.

As another example, if the computer is sending a throttle value of 15,000 to the throttle actuator, we need to know roughly what vehicle speed this will bring us to. We also need to find minimum and maximum values for each control to make sure we do not send values that could damage or fault the actuators, and, for safety reasons, to make sure we do not slam on the brakes too hard or torque the steering wheel too hard.
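
A small sketch of that calibration mapping and clamping, with made-up numbers standing in for values you would measure on the actual vehicle during calibration:

```python
# All constants below are hypothetical calibration results, not values
# for any real vehicle.

RAW_PER_DEGREE = 10.0    # e.g., measured: raw 300 -> 30 degrees at the wheel
STEER_LIMIT_DEG = 40.0   # assumed safe steering limit
THROTTLE_RAW_MIN, THROTTLE_RAW_MAX = 0, 20000  # assumed actuator limits

def raw_to_degrees(raw: float) -> float:
    """Convert the computer's raw steering units to steering degrees."""
    return raw / RAW_PER_DEGREE

def clamp(value: float, lo: float, hi: float) -> float:
    """Never send commands that could fault an actuator or jerk the car."""
    return max(lo, min(hi, value))

steer_deg = clamp(raw_to_degrees(300), -STEER_LIMIT_DEG, STEER_LIMIT_DEG)  # 30.0
throttle = clamp(15000, THROTTLE_RAW_MIN, THROTTLE_RAW_MAX)                # 15000
```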

[Figure: Calibration of Steering Angles]


Developing automated vehicles is a challenging task, to say the least. There are many tough problems that need to be solved before a safe and comfortable ride can be delivered.

By-wire control systems are necessary to provide a base platform that can accommodate automated driving. But even if by-wire is available, you still need a control algorithm that can deliver a safe and comfortable ride. Adjusting and calibrating the PID control algorithm is a time-consuming task because much depends on the physics of the vehicle and on understanding all the calibration measures between the computer output and the control system’s input.

Some elements of by-wire control are more challenging than others. For example, just because a car has electric power steering does not necessarily mean it is suitable for self-driving. Furthermore, take out the steering wheel altogether and you will need a redundant EPAS system that can cover for a failure in the primary unit.

If you are interested in following the VSI build-out of automation, you might be interested in VSI Pro, our newest subscription service, which documents the total build in detail along with sample code bases, fixes to problems, challenges and so on. Throughout the build we will also document the functional performance of all our enabling partners, including sensor packages, ECUs, localization assets and more. VSI Pro is a service designed for R&D departments working on autonomous vehicles. It delivers practical advice and reviews of common enabling technologies, how well they performed, and where the gaps are.

The Internet of Cars

[Figure: Autonomous Car Illustration]

Highly automated driving will be dependent on the cloud for a variety of reasons.  The cloud will become the “glue” that holds everything together.  This new and emerging “internet of vehicles” will be a distributed transportation fabric capable of making its own decisions that control the movement and trajectory of cars. The emerging platforms must manage seas of vehicles operating in full autonomy, with a safe harbor plan and backed by a highly-secured network.

Whether you are talking about Google, Baidu, or Apple, it’s not hard to envision automated vehicle functions that are highly controlled by the assets and infrastructure these companies have or are developing. Their respective assets will become vital elements that support the complexities of automated driving. VSI believes advanced cloud assets will vastly improve the safety and performance of automated vehicles.

These big tech companies have the capacity [and scale] to extend their mobile eco-systems to automated driving. And as it turns out, the dynamics are very similar. A mobile eco-system for automated driving will include a collection of third parties with services to support automated driving, plus a lot of things tangentially related to it, such as smart cities.

The benefits of a cloud-based eco-system make even more sense when you consider all the software and services that will be necessary to support performance and safety. For example, highly automated driving becomes very dependent on geo-coded content for lane-level guidance as well as ground truth. You also have the transient conditions necessary to support dynamic events like potholes, lane closures and reroutes.

Furthermore, just as with mobile, automated vehicles will require device management; in this case the device happens to be a car. In fact, it is likely that the vehicle and mobile eco-systems will blend. After all, it is all about mobility services.

Automated vehicles will be fully connected, as connectivity will not be optional. The cars and fleets will be constantly talking to the network, especially with respect to real-time driving requirements. Sensor data will be aggregated and passed on to other cars to ensure safe and structured movement within the roadway grid. Furthermore, the software in the vehicle will be constantly updated with firmware updates, much like your phone and laptop are now.

[Figure: The Internet of Vehicles]

Open Model

While it is certain that highly automated vehicles will be managed by vast cloud assets, it is not known whether this will follow an open specification like the one Baidu recently announced. Baidu is essentially modeling this after its smartphone business by creating an eco-system that helps Chinese car makers, developers and other constituents jump on board.

For Baidu the approach is pretty shrewd. Besides device management, there are other factors that favor Baidu’s open stack approach. Obviously, the location assets are critical, and Baidu’s ability to crowd-source content from other devices assures that the map can maintain itself. Data collection is another thing that companies like Baidu and Google can do much better than others; they already own those assets, so the infrastructure is in place for this to happen. Another area is their respective work in artificial intelligence. As we know, this takes lots of data, and again, it takes aggregation methods and infrastructure to support it.

We already know about the importance of localization assets. Products like HD Maps, Roadbook and RoadDNA provide metadata that can not only help a vehicle understand where it should and should not be, but can also be a container for fail-safe zones that vastly improve the safety envelope of the vehicle itself. Even lower-level automation systems like GM’s Super Cruise rely on the cloud for localization assets that increase safety.

Recently Baidu announced the Apollo project, a full AV stack that includes a hardware reference design along with the software assets to deploy fully automated driving features. The Baidu plan is not about building a car or anything to do with one. Rather, it is an open reference design and all the big data and services that go with it.

Even though Google shows limited evidence of doing what Baidu is doing, we believe Google’s long-term end game is just that: controlling the internet of cars. For Apple, Baidu and Google, it is all about mobile devices, and in this case the car is another device. These companies have been developing their AV stacks for some time, but Google’s Waymo has said nothing about offering up its code base for anyone to use. Of course, Google has been building this stack for years and trying to perfect it. Apple is doing the same thing. Arguably, Google has gained more experience in automated driving than either Baidu or Apple.

For Baidu, an open-source stack may stimulate the uptake of automation in China, where there are many OEMs that could enable automated features much more quickly than by developing them themselves. Baidu also has a bit of a captive market because of its size, similar to mobile. China is ripe for this approach because there are lots of local Chinese auto brands that could quickly offer automated features. And China is huge. It is still a level playing field, even with Baidu supplying the automation stack.

On the other hand, the rest of the world is a bit of a question. American and European OEMs don’t want to give up that much control. Furthermore, they already have millions of dollars invested in their own AV technology.

Long term, when we have fleets of vehicles operating autonomously, it is entirely possible that the internet of vehicles will be controlled by a handful of dominating companies just like mobile computing is now.

And while autonomous vehicles will be fully capable of operating through gaps in real-time connectivity, they could not survive for long without a connection.

The automobile is and will remain an edge computing device. There would be too much latency in relying on the cloud to control movement in real time. Furthermore, the data coming out of an autonomous vehicle is too voluminous to transmit in raw form: lots of processing happens within the vehicle, and lots of metadata goes to the cloud. The cloud will rely on that metadata to manage a variety of systems and software, as well as a fail-safe plan, which is very dynamic.


So, the billion-dollar question remains, will automation ultimately fall into the hands of a few tech heavyweights? At the end of day, it may come to this but it is going to take a while.

The era of fully automated driving is going to cause major disruption to a lot of industries… not only auto, but many other elements of mobility services, transportation, insurance, urban planning, smart cities, etc.

At the end of the day there is still room for traditional automotive and its suppliers to stay relevant. If the cloud assets and the AV stack become the property of big tech, so be it. It is becoming increasingly clear that big tech does not want to have much to do with the car itself; leave that to the OEMs. There will still be plenty of differentiation in the platforms to meet the many different use cases.

Eventually ownership models will start to change, and this too will have an impact. But here again, while the model will change, the demand for vehicular transportation will not. Longer term you have the concept of teleportation, but I am not about to even go there because I cannot wrap my head around it!

The AV Stack – A look at key enablers for automated vehicles

The race towards automated vehicle technology moved into high gear recently when Intel announced a bid to acquire Mobileye for $15 billion.   

Intel has recognized the growth opportunities in automated vehicle technologies and has assembled a group of core assets necessary to build out the “AV Stack.”  The AV Stack will consist of multiple domains consolidated into a platform that can handle end-to-end automation. This includes perception, data fusion, cloud/OTA, localization, behavior (a.k.a. driving policy), control and safety. 

The AV Stack is also known as “domain control,” although some have begun to label it multi-domain control since it consolidates the functions of many domains into one. The trend toward domain consolidation has been going on for a while, although movement has been slow. Automation, however, tends to push for this architecture because it would not be efficient to do it in a highly distributed way.

Domain control also makes sense from a middleware standpoint. Virtualization of processes is now possible through the OS stack, where you can isolate safety-critical functions from non-safety-critical functions. Furthermore, middleware abstraction layers enable developers to write to a common interface specification and not have to worry about RTE and BSW components.

The AV Stack is really the brains behind autonomous cars including all supporting tasks such as communications, data management, fail safe as well as the middleware and software applications. It is a collection of hardware and software components that are tightly integrated into a domain controller and will be the basis for Automated Systems Level 3 and higher.  

The AV Stack represents the greatest collection of advanced IP content in future cars and is a big opportunity for suppliers that have the capacity to string it all together. 


For suppliers of automotive processors, developing an AV Stack is the right thing to do, assuming you are targeting vehicle automation. Most of the leading suppliers of automotive processors are already doing this to some extent. NXP, Renesas, TI, Intel and Nvidia all have development kits that support multiple nodes in the AV value chain.

You also have tier-one suppliers getting into this space on the premise that processor companies don't necessarily have all the know-how to build out a full ECU domain. Recently, Nvidia has done deals with both ZF and Bosch along these lines. Delphi is active with its CSLP platform and counts Mobileye and Intel as its partners for processing logic.

Another player in the space is TTTech, an Austrian firm that specializes in ECU technologies and is a major partner in Audi’s zFAS controller. TTTech’s approach is supported by Renesas processors as well as an application development framework called TTA Integration.


It is not easy to estimate the total available market (TAM) for the AV Stack because the take rate for Level 3 automation (or higher) will be gradual at first. You also have many supporting domains and licensed IP from third parties: multicore architectures, co-processors, lots of memory, a communications stack, and lots and lots of firmware.

The AV Stack is probably worth at least $10,000 (ASP) if you include the sensors. Within the context of future mobility the AV Stack is the highest concentration of value and probably becomes the single most valuable piece of future vehicles.

Tesla’s Model S: Key Observations About Autopilot & OTA

[Figure: Tesla Model S]

VSI recently rented a Tesla Model S to examine the functionality of Autopilot as well as gain a deeper understanding of the overall architecture of the vehicle.

The vehicle we had access to was a 2015 Model S P90D configured with Autopilot 1.0 running software v8, which is Level 2 automation. As a research company, VSI has been examining the building blocks of automation for nearly three years and is very familiar with the technologies used in the Tesla Model S.

What makes the Tesla Model S so interesting?

  • The over-the-air (OTA) digital communications of the Tesla Model S is by far the most interesting element of this vehicle and is probably the most critical element of the vehicle architecture. 
  • This vehicle talks a lot to the network, and most of it is done over Wi-Fi, as we found out. Within a 24-hour period this vehicle exchanged over 50MB of data with Tesla’s Mothership, a virtual private network (VPN) that manages the data exchange. About 30% of that data flows out of the vehicle.
  • There have been multiple updates to Autopilot over the past few months, particularly with v8.0 (rev. 2.52.22), where vast improvements were made to the performance of the radar. Further improvements have been made to enable fleet learning, which is likely the reason the volume of data exchange is so high.
  • v8.0 accesses more raw data from the front-facing radar and new Tesla software processes those inputs in enhanced ways.
  • Architecturally, the Tesla E/E systems rely heavily on the main media unit, which manages all communications and HMI elements. The consolidation of so many functions into a single domain is remarkable. Many of the Autopilot calculations are made on the main media unit plus another control ECU separate from it. The vehicle camera modules have their own processing, so they take some load off the main media unit.
  • We think the Model S is a proxy for future vehicle architectures, at least those with partial automation features. And again, we think the OTA capability of this vehicle is the most important element of its architecture. This becomes more obvious when you visit a Tesla vehicle center, where there are fewer service bays than at a traditional dealership. Short of mechanical failures, this vehicle is repairable over the network!
[Figure: Tesla Cluster]

Autopilot 1.0 (VSI Profile)

Tesla’s Tech Package with Autopilot costs $4,250 and is enabled through an over-the-air update. The current system consists of a forward-looking camera, a radar, and 12 ultrasonic sensors providing 360-degree coverage.

  • The camera-based sensor comes from Mobileye (camera and EyeQ3 SoC); this is a single monochromatic camera. However, fallout from the May 7 fatal accident led to a split between Tesla and Mobileye: Mobileye will not supply hardware or software to Tesla beyond the EyeQ3 or beyond the current production cycles.
  • Bosch supplies the radar sensor/module. Autopilot v8.0 has access to six times as many radar objects with the same hardware, with a lot more information per object. The radar captures data at 10 cycles per second. By comparing several contiguous frames against vehicle velocity and expected path, the car can tell if something is real and assess the probability of collision. The radar also has the ability to look ahead of vehicles it is tracking and spot potential threats before the driver can.

Control Domain

  • Perception and control are enabled through the Nvidia Tegra X1 processor.
  • Tesla provided its own self-driving control algorithms and some of the software fusing radar and camera data.

HMI Domain

The Model S works with two tracking mechanisms:

  • Locking onto the car ahead or sighting the lane marks. When there’s difficulty reading the road, a “Hold Steering Wheel” advisory appears. If lane keeping is interrupted, a black wheel gripped by red hands and a “Take Over Immediately” message appear on the dash. Failing to heed these suggestions cues chimes, and if you ignore all the audible and visible warnings, the Model S grinds to a halt and flashes its hazards. A heartbeat detector is not included.
  • A thin control stalk tucked behind the left side of the steering wheel commands the cruise-control speed (up or down clicks), the interval to the car ahead (twist of an end switch), and Autosteer (Beta) initiation (two quick pulls back). A chime signals activation, and the cluster displays various pieces of information: the car ahead, if it’s within radar range, and lane marks, illuminated when in use for guidance. A steering-wheel symbol glows blue when your steering input is no longer needed, and Tesla’s gauge cluster also displays the speed limit and your cruise-control setting.

The Model S is considered Level 2 but will change lanes upon command via a flick of the turn-signal stalk (Auto Lane Change). To move two lanes over, you must signal that desire with two separate flicks of the stalk. This function can also be used on freeway entrance and exit ramps.

Autopilot software v8.0 (rev. 2.52.22) will warn drivers if they’re not engaged with their hands on the wheel (after 1 minute if not following a car, 3 minutes if following another car).

If a driver ignores 3 audible warnings within an hour, Autopilot v8.0 will disengage until the car has been parked.
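
These rules are simple enough to encode directly. The sketch below captures them as we understand them from our testing; the function and field names are our own illustrative choices, not Tesla's.

```python
# v8.0 hands-on-wheel rules as described above.

HANDS_OFF_LIMIT_S = {"no_lead_car": 60, "following_lead_car": 180}
MAX_IGNORED_WARNINGS_PER_HOUR = 3

def should_warn(hands_off_s: float, mode: str) -> bool:
    """Warn after 1 min with no lead car, 3 min when following one."""
    return hands_off_s >= HANDS_OFF_LIMIT_S[mode]

def should_lock_out(ignored_warning_times_s: list, now_s: float) -> bool:
    """Three ignored audible warnings within an hour disengages
    Autopilot until the car has been parked."""
    recent = [t for t in ignored_warning_times_s if now_s - t <= 3600]
    return len(recent) >= MAX_IGNORED_WARNINGS_PER_HOUR
```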

Autopilot 2.0 (VSI Profile)

Although we have not tested it yet, it is important to explain Tesla’s newer Autopilot 2.0. We will test it once the software functionality is more complete. At the moment, Autopilot 2.0 is less capable than Autopilot 1.0 because data is being collected via shadow mode to validate the performance of the advanced features.

Tesla’s new Autopilot 2.0 hardware suite ('Hardware 2' or 'HW2') consists of 8 cameras, 1 radar, ultrasonic sensors and a new Nvidia supercomputer to support “Tesla Vision,” Tesla’s new end-to-end image processing software and neural net. Available today on the Tesla Model S and X, and coming to the Model 3, the new Autopilot consists of the following:

  • Cameras: Three forward-facing cameras (main, wide, narrow), two side cameras in the B-pillars, a rear camera above the license plate, and left-rear and right-rear facing cameras
  • Processor: Nvidia Drive PX 2 capable of 12 trillion operations per second. This is 40 times the processing power of 1.0 Teslas. 
  • Sonar: 12 ultrasonic sensors with a range of 8 meters
  • GPS and IMU
  • Radar: Forward Facing Radar
  • Software: Tesla Vision, which uses deep neural networks developed in-house by Tesla

Enhanced Autopilot ($5,000 at vehicle purchase, $6,000 later): The vehicle will match speed to traffic conditions, keep within a lane, automatically change lanes without requiring driver input, transition from one freeway to another, exit the freeway when your destination is near, self-park when near a parking spot and be summoned to and from your garage. Tesla’s Enhanced Autopilot software is expected to complete validation and be rolled out to your car via an over-the-air update in December 2016, subject to regulatory approval.

Full Self-Driving Capability ($8,000 at purchase, $10,000 later): This doubles the number of active cameras from four to eight, enabling full self-driving in almost all circumstances, at what Tesla claims will be a probability of safety at least twice as good as the average human driver. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat. All the user needs to do is get in and tell their car where to go. The autopilot system will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. This feature is expected to roll out by the end of 2017.


The Tesla Model S is by far the most important production car today and is a proxy for future passenger cars. The software enablement through over-the-air updating is the most striking differentiator in our opinion. The rate at which new features, updates and patches are deployed is astonishing. The volume of data is also a good indicator of what the requirements of a cloud-connected car should be.

Although not especially relevant for the purposes of VSI, it should be mentioned that the fit and finish of this vehicle is sub-par when compared to German vehicles. This is especially true for the interior components, with the exception of the center stack, which is outstanding in quality and functionality. The same can be said about the instrument cluster, as would be expected in a digital vehicle like this.

The infotainment system in this vehicle is enhanced by the large display and is much more intuitive than in most conventional vehicles. There is no switchgear in this vehicle at all except for the steering wheel stalks and controls.

Performance is another key attribute of this vehicle. Power management and battery management are outstanding, which can be attributed to the all-electric powertrain as well as the ability to update the power management software via OTA.

Autopilot works very well and gets better all the time, especially with v8.0, where enhancements to sensor performance and the reduction of false positives are critical. The self-learning capabilities are reflected in the amount of data that is now exchanged between the mothership and the car itself.

In normal driving modes the Tesla Model S is very tight and performance-oriented. Handling is surprisingly good for a vehicle that weighs nearly 5,000 pounds. Acceleration is outstanding and rivals or exceeds most high-end performance (internal combustion) sedans. Braking is also very good, enhanced in part by the regenerative braking, which feels like engine braking on conventional vehicles.

Autopilot HW2 (v8.1) will undoubtedly continue the path that Tesla is on. We don’t have any reason to doubt Tesla’s abilities to realize full automation with the new hardware platform.   


Understanding Operational Design Domains

[Figure: Safety Assessment Letter]

NHTSA’s HAV policy, published in September 2016, provides a regulatory framework and best practices for the safe design, development, testing, and deployment of HAVs, aimed at manufacturers and all other entities involved.

Any company that plans to test or deploy highly automated vehicles on public roads in the United States is required to submit a “Safety Assessment Letter” to NHTSA’s Office of the Chief Counsel. NHTSA’s guideline for automated vehicle development calls for many items to be detailed in the letter, showing whether the company is meeting this guidance.

Among these items, defining driving scenarios is the critical first step for OEMs, tier ones and other technology companies that want their HAVs out on the road. The definition of where (roadway types, roadway speeds, etc.) and when (under what conditions, such as day/night, normal or work zone, etc.) an HAV is designed to operate must be described in detail in the letter.

[Figure: Operational Design Domains]

To realize such scenarios, the core functional requirements enabled by perception, processing and control domain technologies, as well as safety monitors, need to be defined, and those systems must be rigorously tested, simulated and validated.

Such processes, documented in NHTSA’s “Guidance Specific to Each HAV System” within the “Framework for Vehicle Performance Guidance,” fall into four parts: 1) ODD, 2) OEDR, 3) Fall Back and 4) Testing/Validation/Simulation. Below is VSI’s understanding of, and guidance on, the key tasks related to each part in developing and designing HAVs.

  • A vehicle with automated features must have established an Operational Design Domain (ODD). This is a requirement and a core initial element for the letter. An SAE Level 2, 3 or 4 vehicle could have one or multiple systems, one for each ODD (e.g., freeway driving, self-parking, geo-fenced urban driving, etc.).

The key task here is to define the various conditions and “scenarios” (ODD) under which the vehicle must be able to detect and respond to a variety of normal and unexpected objects and events (OEDR), and even fall back to a minimal risk condition in the case of system failure (Fall Back).

  • A well-defined ODD is necessary to determine what OEDR (Object and Event Detection and Response) capabilities are required for the HAV to safely operate within the intended domain. OEDR requirements are derived from an evaluation of normal driving scenarios, expected hazards (e.g., other vehicles, pedestrians), and unspecified events (e.g., emergency vehicles, temporary construction zones) that could occur within the operational domain. 

The key task here is defining the “functional requirements” as well as the “enabling technologies” (perception, driving policy and control) per scenario defined in ODD.

  • Manufacturers and other entities should have a documented process for assessment, testing, and validation of their Fall Back approaches to ensure that the vehicle can be put in a minimal risk condition in cases of HAV system failure or a failure in a human driver’s response when transitioning from automated to manual control.

The key task here is defining what the fall back strategy should be and how companies should go about achieving it. A Fall Back “system” should be part of an HAV system, operating specifically in a condition of system failure (especially in L4 automation, where the driver is out of the loop). System failure is another “condition” within the ODD for which you need to design the system architecture, accommodating a fail-operational or fail-over Fall Back safety system. OEDR functional requirements, on the other hand, come from outside the vehicle and cope with environmental “conditions,” whether they are predictable or not.
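
To make the ODD/Fall Back pairing concrete, here is a hedged sketch of how an ODD might be captured as structured data for a Safety Assessment Letter. The fields follow NHTSA's where/when framing; the class and its values are our own illustration, not a NHTSA schema.

```python
from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    name: str
    roadway_types: list   # where: roadway types
    max_speed_mph: int    # where: roadway speeds
    conditions: list      # when: day/night, weather, work zones, etc.
    fallback: str         # minimal risk condition on system failure

# Hypothetical freeway-driving ODD (one of possibly several per vehicle)
freeway_odd = OperationalDesignDomain(
    name="freeway driving",
    roadway_types=["interstate", "divided highway"],
    max_speed_mph=70,
    conditions=["day", "night", "dry", "light rain"],
    fallback="hand back to driver; else slow to a stop in lane with hazards on",
)
```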

VSI believes that HAVs will come to rely on AI-based systems to provide the “reasoning” that will become necessary for vehicles to handle edge cases. In L4 vehicles you may have a rule-based, deterministic, deductive system complemented by a probabilistic, AI-based, inductive system to enable fully fail-operational automated driving (as opposed to L3 fail-over, which hands back to the driver) in all driving scenarios.

When using a probabilistic model, it is important to use a large dataset that includes a wide variety of data and many types of environments to improve the performance of the AI system. It is quite challenging for these AI modules to go through performance and safety validation even if their accuracy is very high. A common practice to give the AI modules some credibility is extensive testing via simulation, test tracks, real-world testing, etc. Ultimately, however, it may be difficult to assign a high ASIL rating to an AI-based system despite favorable outcome-based validation.

Considering that an AI-based system will be difficult to assign a high ASIL rating because of its limited traceability, there is a growing school of thought that the way to cope with low-ASIL-rated probabilistic algorithms like AI is to pair them with a high-ASIL-rated deductive system that monitors the probabilistic system and the decisions it makes (a safety monitor system).

The deductive system, on the other hand, is not capable of full driving and navigation; it is only capable of acting as a fail-over system that safely shuts things down (pulling over, coming to a stop, or simply continuing to follow the lane safely). For AI to be deployed in a pragmatic way, there will still be traditional deterministic approaches to collecting, processing and preparing data for input into the AI system. On the back side, deterministic systems check the output data from the AI system. This provides a safety-net layer for an AI-based autonomous control system that is probabilistic in nature.
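
The resulting doer/checker architecture can be sketched as follows. The planner and trajectory interfaces here are illustrative assumptions; the point is that a simple, traceable checker bounds a complex, probabilistic planner.

```python
# Safety-monitor pattern: a low-ASIL AI planner proposes, a high-ASIL
# deterministic checker disposes. All interfaces are illustrative only.

def envelope_ok(trajectory, lane_model, max_lat_accel=3.0) -> bool:
    """Deterministic check: every point stays in a mapped lane and
    within a simple dynamics limit."""
    return all(pt.in_lane(lane_model) and abs(pt.lat_accel) <= max_lat_accel
               for pt in trajectory)

def plan_step(ai_planner, fallback_planner, world, lane_model):
    trajectory = ai_planner.propose(world)   # probabilistic, inductive
    if envelope_ok(trajectory, lane_model):
        return trajectory
    # Veto: fail over to the deductive system (follow lane, slow, stop)
    return fallback_planner.safe_stop(world, lane_model)
```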

  • Autonomous vehicle testing, validation and simulation: Software testing is all too often simply a bug hunt rather than a well-considered exercise in ensuring quality, according to Philip Koopman of Edge Case Research. Challenges await developers who are attempting to qualify fully autonomous, NHTSA Level 4 vehicles for large-scale deployment. We also need to consider how such vehicles might be designed and validated within the ISO 26262 V framework, as this is an accepted practice for ensuring safety. It is a well-established safety principle that computer-based systems should be considered unsafe unless convincingly argued otherwise.

The key task here is understanding best practices and solutions for the test and validation of the HAV system. Development within the traditional V-model is highly relevant for many of the systems and components used. However, system-level validation will likely include simulation as well as outcome-based validation, on the premise that real road testing can never be complete enough to cover edge cases (the infeasibility of complete testing). It is impractical to develop and deploy an autonomous vehicle that will handle every possible combination of scenarios in an unrestricted real-world environment. Therefore, it is critical to find unique testing and simulation tool companies in the process of developing HAV scenarios.

A couple of companies are stepping up to offer solutions and know-how for this complex software development issue, especially in simulation techniques.

  • Ricardo is leveraging agent-based modeling (ABM) simulation methodologies to support advanced testing and analysis of autonomous vehicle performance. The approach combines agents (vehicles, people or infrastructure) with specific behaviors (selfishness, aggression) and connects them to a defined environment (cities or test tracks) to understand emergent behaviors during a simulation. The practice is used to recreate real-world driving scenarios in a virtual environment to test complex driving scenarios.
  • Edge Case Research is developing an automated software robustness testing tool that prioritizes tests that are most likely to find safety hazards. Scalable testing tools give developers the feedback they need early in development, so that they can get on the road more quickly with safer, more robust vehicles.
  • All driving simulation and test methods require the generation of the test scenarios against which the systems are to be tested. Vertizan developed a constrained randomization test automation tool, Vitaq, to automatically create the required test scenarios for testing ADAS and autonomous systems in a driving simulator setup. The constrained randomization is deployed at two levels: 1) static randomization and 2) dynamic randomization. Static randomization is used to automatically create the base scenario with respect to possible path trajectories of vehicles, environment variables and traffic variables. Dynamic randomization is achieved by real-time communication between the driving simulator and the constrained randomization tool via a TCP/IP HiL interface (client-server interface). Constrained randomization is then used to intelligently explore the possible sample space to find the corner cases for which an ADAS or autonomous system may fail.
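
As an illustration of the static level, here is a small constrained-randomization sketch in Python. It is written in the spirit of such tools, not against Vitaq's actual API; the scenario fields and the constraint are invented for the example.

```python
import random

def random_scenario(rng: random.Random) -> dict:
    """Draw one base scenario, then apply constraints that keep the
    randomization inside the plausible sample space."""
    scenario = {
        "ego_speed_mps": rng.uniform(15, 35),
        "lead_gap_m": rng.uniform(5, 80),
        "weather": rng.choice(["clear", "rain", "fog"]),
        "cut_in": rng.random() < 0.3,
    }
    # Constraint: fog caps plausible traffic speed
    if scenario["weather"] == "fog":
        scenario["ego_speed_mps"] = min(scenario["ego_speed_mps"], 25.0)
    return scenario

rng = random.Random(42)  # seeded so test runs are reproducible
scenarios = [random_scenario(rng) for _ in range(1000)]
```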


Developing autonomous vehicles and the deployment of them presents challenges to the companies that are designing, developing or deploying them. It also presents challenges to the governing bodies who must ensure the safety of these technologies.

To create a framework for this, NHTSA recently established requirements via the Safety Assessment Letter, which is essentially a detailed document covering many areas of interest. The most significant and challenging element of the requirement is defining the Operational Design Domain (ODD): the definition of where (roadway types, roadway speeds, etc.) and when (under what conditions, such as day/night, normal or work zone, etc.) an HAV is designed to operate.