The Road to Level-5 Autonomous Vehicles
Design Trends and the Transformation of Everything
By Sudha Jamthe for Mouser Electronics
Vehicle Automation at a Glance
The Society of Automotive Engineers’ system for rating vehicle automation is based on the following scale:
- Level 0: No automation—humans fully control vehicles
- Level 1: Driver assistance (such as cruise control)—humans perform driving tasks
- Level 2: Partial automation—systems control multiple functions simultaneously, such as steering and speed, while humans monitor the driving environment
- Level 3: Conditional automation—systems monitor the driving environment and drive under limited conditions (e.g., Tesla running on Autopilot exhibits level 2 or 3)
- Level 4: High automation—systems drive the vehicle under defined conditions, and a human driver can intervene if necessary
- Level 5: Full automation—systems drive the vehicle with no human driver and no human driver controls
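For engineers encoding the SAE scale in software, the levels map naturally onto an enumeration. The names and helper below are illustrative, not part of any standard API — a minimal sketch:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels as summarized above."""
    NO_AUTOMATION = 0           # humans fully control the vehicle
    DRIVER_ASSISTANCE = 1       # e.g., cruise control; humans perform driving tasks
    PARTIAL_AUTOMATION = 2      # system controls steering and speed together
    CONDITIONAL_AUTOMATION = 3  # system monitors the driving environment
    HIGH_AUTOMATION = 4         # system drives; a human can intervene if necessary
    FULL_AUTOMATION = 5         # no human driver and no human driver controls

def requires_human_fallback(level: SAELevel) -> bool:
    """Below level 4, a human must be ready to take over driving."""
    return level < SAELevel.HIGH_AUTOMATION
```

Because `IntEnum` members compare as integers, code can test thresholds (e.g., `level >= SAELevel.HIGH_AUTOMATION`) directly.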
Autonomous driving technology is advancing rapidly, with all the major players testing level-4 vehicles. Most
expect to test level-5 vehicles on public roads within the next few years, and this activity has spawned its own
ecosystem of specialized sensors, power systems, data processing, and software.
Most autonomous vehicle (AV) players today say that they’re testing vehicles that are at level 4 (based on
the Society of Automotive Engineers' zero-to-five scale). That means these vehicles are technically able to
operate without human input, but humans can take over if the AV gets into trouble. True, these vehicles
typically run in limited geographies restricted by geofencing, but this feat is still an impressive
accomplishment and the cars keep getting better. All the players in this space are striving for fully autonomous
level-5 vehicles. Waymo, Google’s AV project, is already preparing to test level-4 vehicles without a
human backup driver on public roads in California.
Essential Capabilities
To understand what has been accomplished and what still needs to be done, consider these four essential
capabilities a self-driving car must have to function without human input:
Localization
An AV must always know where it is relative to its known world. It must know its starting point, whether to turn,
how to navigate around an obstacle, and where to detour when a road is closed. If the vehicle shuts down for a
time, it must be able to put its current location on the “map.” Notice that I say it must localize
itself in its known world: This is because AVs are not likely to know the entire world. Level-4 AV testing
typically begins with a restricted area. As the vehicle gets better at negotiating this area, testers expand its
routes. When AVs become commercially viable, many will function within limited ranges and routes. Regardless of the size and shape of an AV's world, the vehicle must always be able to localize itself.
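The core of localization — tracking position relative to a known starting point — can be illustrated with a toy dead-reckoning update. Real AVs fuse GPS, inertial sensors, wheel odometry, and map matching; this sketch (all names are my own, not a real AV API) shows only the basic pose-tracking idea:

```python
import math

def dead_reckon(pose, distance, heading_change):
    """Update an (x, y, heading) pose estimate from one odometry step.

    A toy model: turn by heading_change (radians), then move distance
    meters along the new heading. Production localization also corrects
    accumulated drift against a map.
    """
    x, y, heading = pose
    heading += heading_change
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return (x, y, heading)

# Start at the origin facing east; drive 10 m, turn left 90 degrees, drive 5 m.
pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, 10.0, 0.0)
pose = dead_reckon(pose, 5.0, math.pi / 2)
```

Pure dead reckoning drifts over time, which is exactly why AVs continuously re-anchor themselves against their known map.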
Perception
An AV must be able to perceive the physical world around it, including recognizing roads, lanes, traffic signs
and lights, other vehicles, pedestrians, obstacles, and all the things humans see and process as we drive. The
most common AV sensor technologies include video cameras, stereo video, radar, and light detection and ranging
(LIDAR)—a pulsed laser technology used for object detection and avoidance. Currently, no standard approach
exists for the perception layer. Many players are testing various combinations of sensor arrays, and cost is
obviously a factor. Whatever sensor array an AV has, it must be able to stitch together all the data from all
those sensors to build a detailed and complete local map of its surroundings, which updates continuously in real
time as the vehicle moves.
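Stitching detections from several sensors into one local map can be sketched as a simple occupancy grid. The sensor names and the any-sensor voting rule here are deliberate simplifications — production perception stacks weight each sensor by confidence and update the grid probabilistically:

```python
def fuse_detections(grid_size, sensor_reports):
    """Fuse obstacle detections from several sensors into one local map.

    sensor_reports maps a sensor name (e.g., 'lidar', 'radar', 'camera')
    to a list of (row, col) grid cells where that sensor saw an obstacle.
    A cell is marked occupied if any sensor reports it.
    """
    grid = [[0] * grid_size for _ in range(grid_size)]
    for cells in sensor_reports.values():
        for r, c in cells:
            grid[r][c] = 1
    return grid

# Illustrative detections: lidar and radar agree on one obstacle at (2, 2).
local_map = fuse_detections(4, {
    "lidar": [(0, 1), (2, 2)],
    "radar": [(2, 2), (3, 0)],
    "camera": [(0, 1)],
})
```

In a moving vehicle this fusion runs continuously, so the grid is rebuilt or shifted every sensor cycle as the vehicle's surroundings change.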
Key Facts
- In March 2018, the California Department of Motor Vehicles (DMV) gave approval to 52
companies to test autonomous vehicles on California roads and highways.
- In November 2018, Waymo received the first permit ever from the California DMV to road-test an autonomous vehicle (AV) with no human at the controls.
- It’s estimated that in 2025, 8 million level-3 or higher AVs will be sold.
- Waymo AVs have logged more than 5 million road miles in 25 cities.
Path Planning
An AV must be able to make every navigational and operational decision in every moment as it’s traveling.
This decision-making goes beyond getting from point A to point B in its known world. The vehicle must also make
decisions about what to do next if it stops, when it should speed up or slow down or change lanes, how to
execute turns, what to do if it encounters an obstacle, and what to do if it encounters something it has never
experienced before. Of course, it must be able to decide when to leave point A and what to do when it reaches
point B.
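The point-A-to-point-B piece of path planning can be sketched as a shortest-path search over the vehicle's local map. Real planners also reason about speed, lane changes, and traffic rules; this breadth-first search on a small grid (obstacle cells marked 1) covers only the geometric routing step:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest path on a 4-connected grid.

    Returns the list of (row, col) cells from start to goal, or None
    if no route exists — a real AV would then replan or stop safely.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# A blocked middle row forces a detour around the obstacle.
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = plan_path(grid, (0, 0), (2, 0))
```

The detour case mirrors the road-closure scenario above: when the direct route is blocked, the planner finds the next-best path through the known world.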
Control
An AV must be able to reliably and accurately operate the proper sequence of driver controls to perform essential operations. If the AV decides it needs to go around an obstacle, and doing so requires a lane
change, it must use its sensor array to ensure the path is clear to perform the lane change without violating
traffic controls for that stretch of the roadway; it must then execute the proper speed adjustments, signals,
and steering to perform the action.
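The lane-change sequence described above can be sketched as an ordered list of control actions gated by a sensor check. The action names are purely illustrative — they do not correspond to any real vehicle control API:

```python
def lane_change_sequence(adjacent_lane_clear):
    """Return the ordered control actions for a lane change.

    Mirrors the sequence in the text: verify the path is clear via the
    sensor array, then signal, adjust speed, and steer. If the lane is
    not clear, the maneuver is aborted and the vehicle holds its lane.
    """
    if not adjacent_lane_clear:
        return ["abort: hold current lane"]
    return [
        "activate turn signal",
        "adjust speed to gap",
        "steer into adjacent lane",
        "center in lane",
        "cancel turn signal",
    ]
```

In practice each action is itself a closed-loop control task running against live sensor data, not a one-shot command.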
All four of these capabilities are essential for an AV to function at any level of automation. Commercially viable level-5 AVs must exhibit them with exceptional accuracy and reliability and be able to react to all road situations independently, without human intervention. Achieving this level of reliability depends on another
critical piece in an AV system: The artificial intelligence (AI) platform, with its deep machine learning (ML)
and inference modeling, as well as its training to drive.
What’s Happening Today
Mouser Manufacturers Leading the Way
- On October 29, 2018, the Volkswagen Group, Intel's Mobileye autonomous vehicle unit, and Champion Motors announced that they will deploy Israel's first self-driving ride-hailing service in 2019.
- In September 2018, Micron announced plans to invest in its facility in Manassas, Virginia, which
produces high-performance memory chips for AVs.
Much of the level-4 testing happening today is really training the system to react properly in every possible
driving scenario. Typically, this work starts small, with the test vehicle traveling a narrowly prescribed route
over and over. Through this process, the AV is being taught the basics through simulation. Every time it travels
a test route, all its operations are logged. If the AV gives control to a human driver, it’s called a
disengagement, and that human intervention becomes part of the log. Every AV disengagement is reported to the
DMV by regulation. When the car returns to the shop, engineers tweak the vehicle’s AI brain, using what it
learned on that test run. They then create a new inference model for the AV to use when the vehicle goes out
again. This process is complex deep learning that involves processing large volumes of unstructured data
and ultimately being able to make decisions in real time. Human interventions at this stage are a critical part
of the training process.
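Because every disengagement is logged and reported, test programs are often summarized by a miles-per-disengagement metric. The log format below is my own illustration, not the DMV's reporting schema:

```python
def miles_per_disengagement(test_runs):
    """Compute miles per disengagement from a list of test-run logs.

    Each run is a (miles_driven, disengagement_count) pair. A higher
    result means the AV went longer between human interventions.
    """
    total_miles = sum(miles for miles, _ in test_runs)
    total_disengagements = sum(count for _, count in test_runs)
    if total_disengagements == 0:
        return float("inf")  # no interventions logged over these runs
    return total_miles / total_disengagements

# Three illustrative test runs: 500 total miles, 3 total disengagements.
runs = [(120.0, 2), (200.0, 1), (180.0, 0)]
rate = miles_per_disengagement(runs)
```

Tracking this metric over successive inference-model updates is one simple way to see whether the retraining loop described above is actually improving the vehicle.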
As the AV masters a route, the testers expand the vehicle’s range to take in new road situations and
geographies. That new geography may cover a community and then expand to an adjacent community. Waymo has been
training level-4 AVs in Phoenix, Arizona, and several other cities. It’s worth noting that many of these
testing programs cover routes and regions that would potentially support an AV-based business model, such as
Mobility as a Service. Major players are being strategic about where and how they test their vehicles.
The driving knowledge the AI system gains is cumulative and transferable to other vehicles. At the level-4
testing stage, driving knowledge enhancements may look like beta code updates. When advanced AVs become
commercially viable, they will receive regular updates of accumulated knowledge. The key is that the more miles
AVs log, the better and more reliable they become because they’re continuously learning systems.
What’s Ahead
In the coming year, we will see more level-5 AV road tests, and AV developers will continue to address the
engineering challenges that remain. For example, current AVs do not fit into the social fabric of the road. If
you honk at an AV, it will not know what to do. It cannot communicate with pedestrians or other drivers. Recent
industry discussions have focused on establishing communications standards to avoid the chaotic situation of
every AV manufacturer having its own communication protocols.
Level-5 AVs will ultimately have no need for driver controls, and there will be no reason for everyone to face
forward. AVs will become high-bandwidth devices capable of streaming media for passenger consumption, and they
will have to do so without disrupting the control systems and connectivity associated with a vehicle’s
operation. These design parameters call for a complete redesign of car interiors.
It’s a tremendously exciting time for engineers in the automotive space. AVs are poised to transform the auto industry, from sensing and control systems and vehicle ergonomics to a greater diversity of purpose-built vehicles and an expanding role for AVs in the continued emergence of all-electric vehicles. It may
all be happening faster than we think.
Sudha Jamthe is CEO of IoTDisruptions.com
and DriverlessWorldSchool.com, as well as a recognized technology futurist. She has more than 20 years of
experience with companies like eBay, PayPal, and GTE. Sudha is also the author of 2030: The Driverless World and
three books on the Internet of Things. She teaches courses about the business of IoT and autonomous vehicles at
Stanford Continuing Studies and DriverlessWorldSchool.com, is chair of the strategic advisory board for
Barcelona Technology School, and is an ambassador for FundingBox Impact Connected Cars.