4D Imaging Radar Helps Enable L2+ and Higher Vehicles

Author:
Huanyu Gu, Director Product Marketing and Business Development ADAS, NXP Semiconductors

Date:
11/28/2022

Commercially viable 4D imaging radar technology can enable the broad proliferation of L2+ ADAS and provide a path toward fully autonomous L5 vehicles


Figure 1: Levels of ADAS and autonomous driving

The adoption and evolution of advanced driver-assistance systems (ADAS) and autonomous driving (AD) are crucial to safer driving, avoiding accidents, and saving lives. New Car Assessment Program (NCAP) five-star safety ratings accelerate this shift by making features such as automatic emergency braking, blind-spot detection, and vulnerable road user detection mandatory.

The introduction of 4D imaging radar technology for vehicle safety and autonomous driving applications has reshaped the timing and economics of the automotive industry’s evolution from L0 to fully autonomous L5 vehicles. By enabling precise 360-degree environmental mapping and fine resolution object detection at far distance, imaging radar redefines industry expectations for its role relative to cameras and lidar sensors. In terms of performance and reliability, imaging radar closes the gap with lidar at a commercial cost that lidar may never come close to.

While automotive OEMs navigate the many design complexities required to achieve L3 conformance, attention has turned to a transitional level, L2+ (see figure 1). L2+ provides similar capabilities to L3, while the driver continues to provide redundancy. During this interim, OEMs can optimize the balance between features and cost and gradually introduce L3 ‘light’ vehicles.

Imaging radar evolution

In the early days, radar technology was mainly used for seeing other cars. Essentially, these were 2D-capable sensors that measured speed and distance. Today’s state-of-the-art 4D imaging radars measure speed and range, as well as horizontal and vertical angles. This capability allows the vehicle to see and differentiate not only cars but, more importantly, pedestrians, bicycles, and smaller objects.

This is made possible with Multiple Input Multiple Output (MIMO) antenna arrays that generate large numbers of virtual channels. The effective radar aperture grows with the number of virtual channels, and angular resolution is inversely proportional to aperture size; therefore, the more virtual channels, the finer the angular resolution. MIMO does introduce artifacts and ambiguities into the received radar signals; however, these can be mitigated.
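The relationship between virtual channels and angular resolution can be sketched numerically. The antenna counts below are illustrative assumptions, not a specific NXP configuration; the sketch uses the standard λ/aperture approximation for a uniform linear virtual array with half-wavelength element spacing.

```python
import math

def mimo_virtual_channels(n_tx: int, n_rx: int) -> int:
    """A MIMO array with n_tx transmitters and n_rx receivers
    synthesizes n_tx * n_rx virtual channels."""
    return n_tx * n_rx

def angular_resolution_deg(n_channels: int, spacing_wavelengths: float = 0.5) -> float:
    """Approximate angular resolution (degrees) of a uniform linear
    virtual array: theta ~ lambda / aperture, where the aperture is
    n_channels * spacing (expressed in wavelengths)."""
    theta_rad = 1.0 / (n_channels * spacing_wavelengths)
    return math.degrees(theta_rad)

# Illustrative (hypothetical) configurations:
basic = mimo_virtual_channels(3, 4)      # 12 virtual channels
imaging = mimo_virtual_channels(12, 16)  # 192 virtual channels
print(angular_resolution_deg(basic))     # roughly 9.5 degrees
print(angular_resolution_deg(imaging))   # roughly 0.6 degrees
```

The sketch shows why scaling up the virtual array matters: going from a dozen channels to a couple of hundred shrinks the angular resolution by more than an order of magnitude, which is what separates "there is an object ahead" from distinguishing a motorbike next to a truck.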

MIMO waveform construction and artifact mitigation are themselves dedicated areas of innovation. Tier-1 and OEM customers may develop their own proprietary MIMO waveforms and solutions for mitigating the associated artifacts. As an alternative, NXP offers advanced MIMO waveforms and artifact-mitigation techniques through its Premium Radar SDK, allowing customers to reduce R&D effort and accelerate product time-to-market. The implementation of these techniques is optimized for the underlying NXP radar processors, such as the S32R45, allowing Tier-1 and OEM customers to extract maximum performance from the synergy of aligned software and hardware development.

With these improved capabilities, radar sensors can deliver high-resolution point-cloud outputs, enabling the detailed environmental mapping and scene awareness expected of higher-level ADAS and autonomous driving systems. Yet they come at a much lower cost than lidar, making them suitable for widespread volume adoption. On top of that, 4D imaging radar opens up a unique multi-mode operating capability, detecting objects simultaneously across all ranges, from the near field out to 300 meters. Combined with the ability to operate independently of environmental conditions, radar sensors are anticipated to become the most fundamental, versatile and robust sensing technology in the sensor suites of vehicles at any AD level.

Challenging autonomous driving use cases

Camera and radar sensors complement each other well and are destined to be widely deployed from L1 to L5. At L2, sensor fusion is required for the vehicle to exert lateral and longitudinal control simultaneously. At L2+ and L3, the vehicle is expected to combine multiple lateral and longitudinal controls to handle significantly more complex, task-oriented use cases.

The highway pilot use case is a good illustration of the advanced capabilities of 4D imaging radar (see figure 2). While the vehicle travels at speeds of up to 130 km/h, it can initiate a lane change and overtake the slower car ahead. During this overtaking maneuver, the car needs to look far ahead as early as possible to identify danger and reliably secure free space and a safe distance from cars and motorbikes in neighboring lanes.
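A back-of-the-envelope sketch gives a feel for the look-ahead distances involved. The speed of the faster vehicle behind, the maneuver duration and the safety margin below are illustrative assumptions, not figures from this article:

```python
def required_detection_range_m(ego_kmh: float, other_kmh: float,
                               maneuver_s: float, margin_m: float = 50.0) -> float:
    """Distance at which a faster vehicle in the target lane must be
    detected so the ego car can complete an overtake safely:
    closing speed (m/s) times maneuver duration, plus a safety margin."""
    closing_ms = abs(other_kmh - ego_kmh) / 3.6  # km/h -> m/s
    return closing_ms * maneuver_s + margin_m

# Ego at 130 km/h, a hypothetical car approaching at 180 km/h,
# assuming an 8-second lane-change-and-overtake maneuver:
print(round(required_detection_range_m(130, 180, 8.0), 1))  # ~161.1 m
```

Even with these modest assumptions the required detection range lands well beyond what conventional short- and mid-range radar covers, which is why the long-range mode of a multi-mode imaging radar matters for this use case.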


Figure 2: Simultaneous triple beam multi-mode for short-, mid- and long-range capability


Conventional 2D radar has a limited detection range and lacks the angular resolution to identify and secure the needed free space. At L2+, the driver is still expected to take over complicated maneuvers such as these when required. Lidar would merely provide an additional layer of redundancy and can therefore be left out of L2+ designs for cost optimization.

Urban pilot is another example of a complex, task-oriented use case. In this hazard-rich urban environment, the vehicle should be able to safely avoid pedestrians and pets crossing unexpectedly into traffic while traveling at speeds of up to 70 km/h. Adding to this complexity are time-bounded streets that automatically reconfigure into one or two lanes depending on the time of day, though handling these would be an L5 use case.
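The braking demand implied by the urban pilot scenario can be sketched with a simple stopping-distance calculation. The system reaction time and deceleration below are illustrative assumptions, not values from this article:

```python
def stopping_distance_m(speed_kmh: float, reaction_s: float = 0.5,
                        decel_ms2: float = 8.0) -> float:
    """Total stopping distance: distance covered during the system's
    reaction time plus the braking distance v^2 / (2a)."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

# At the 70 km/h urban speed mentioned above, with an assumed
# 0.5 s system reaction time and 8 m/s^2 deceleration:
print(round(stopping_distance_m(70), 1))  # ~33.4 m
```

Under these assumptions the vehicle needs over 30 meters just to stop, so a pedestrian stepping into traffic must be detected and classified well before that, even in rain, darkness or glare, which is where radar's robustness to environmental conditions pays off.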

Highly accurate object recognition and classification become essential in this use case, as does a highly detailed environmental mapping capability. At L4 and L5, the driver is not required to take control of the vehicle; here, full redundancy is no longer optional, it is essential.

For this reason, L4 and L5 vehicles are expected to use all three primary vehicle sensor technologies — camera, radar and lidar — integrated within the sensor suite. High-performance 4D imaging radar sensors can help achieve L3 and higher without the need for more than one lidar per vehicle for redundancy purposes, enabling OEMs to achieve optimized cost at the sensing and processing layers.

Conclusion

The advent of commercially viable 4D imaging radar technology has major implications for enabling the broad proliferation of L2+ ADAS features and a path toward fully autonomous L5 vehicles. The role and significance of camera, radar, and lidar sensing technologies at each AD level depend on their relative strengths. Undoubtedly, radar will become the most fundamental sensor, supplemented by cameras at L2+ and L3, and by cameras and lidar at L4 and L5.


NXP
