Author: Richard Herbert, automotive networking product marketing manager for Microchip Technology’s USB and Networking business unit
Date: 12/01/2023
The development of basic Advanced Driver Assistance Systems (ADAS) is now commonplace. But to enable enhanced features that bridge varying levels of complexity (even ones we have not dreamt of yet), vehicles need to be provisioned with high-performance SoCs connected over a standard communication interface, so that the data they process can be shared as rich data sets with other functions in the vehicle.
This could enable separate vehicle functions to work in concert: for example, by making hazard detection data from cameras and radar available to front-facing headlamps to highlight features in the road, the radius of a bend, or a pedestrian stepping out. PCIe® enables data sharing across many SoCs to span multiple vehicle functions, while the software model it uses enables a seamless expansion of capacity across PCIe generations, SoC generations and ultimately automotive platform generations. The desire to future-proof platforms today for multiple years drives the adoption of PCIe technologies.
The history of PCIe
The history of PCIe goes back generations to PCI, a parallel computer interface bus used in many servers, embedded systems and home PCs as a peripheral interconnect. It was defined and standardized inside the PCI-SIG (Special Interest Group). PCI allowed many devices to connect to a single SoC; initially these were predominantly Intel or AMD x86 devices, connecting to a network, a high-speed peripheral or a graphics controller. It evolved from 32-bit to 64-bit data buses running from 33 MHz to 66 MHz and eventually became serialized as PCIe. PCIe carries high-speed data over differential connections, one for transmit and one for receive; such a pair of differential connections is commonly called a “lane”. Lanes can be grouped into ports, and depending on the peripheral or SoC attached, the width may go from one lane per port up to 16 or 32 lanes per port.
Figure 2: PCI Express link performance
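The link performance shown in the figure can be reproduced from the published per-lane signalling rates and line-coding overheads. The sketch below is illustrative only; usable throughput is lower still once packet and protocol overhead are counted.

```python
# Published per-lane signalling rates in GT/s for PCIe generations 1-5.
GEN_RATES_GT = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def lane_bandwidth_gbs(gen: int) -> float:
    """Per-lane bandwidth in GB/s, one direction, after line coding."""
    rate = GEN_RATES_GT[gen]
    # Gen1/2 use 8b/10b encoding (80% efficient); Gen3+ use 128b/130b.
    efficiency = 8 / 10 if gen <= 2 else 128 / 130
    return rate * efficiency / 8  # GT/s -> GB/s after encoding

def link_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Aggregate one-direction bandwidth for an xN link."""
    return lane_bandwidth_gbs(gen) * lanes

# A Gen4 x16 link moves roughly 31.5 GB/s in each direction.
print(f"Gen4 x16: {link_bandwidth_gbs(4, 16):.1f} GB/s")
```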
Advantages of an Ethernet Network
In the PC example the connectivity hierarchy is simple: a single powerful SoC has multiple high-speed peripherals connected to it. The same was true in servers, but as servers evolved, having two SoCs in close proximity and sharing peripherals (access to storage, high-speed memory and sometimes data) became an advantage. This is the foundation of PCIe switches. A PCIe switch allows independent, separated access from each SoC (signified as a Root Complex) in a multi-SoC system to a common set of peripherals, known as Endpoints. Each SoC believes it has exclusive access to those peripherals, thanks to a feature known as Non-Transparent Bridging (NTB). An SoC using a PCIe switch in this type of system takes advantage of very high-speed, very low-latency data connectivity, typically over a short distance.
Contrast this with Ethernet. Built on a standard developed by the IEEE, and having full backward compatibility, it is intended to reach devices metres apart, across backbones that today run at either 100 Mb/s or 1000 Mb/s, rising to 10 Gb/s in the future. Moreover, Ethernet has been developed inside the OPEN Alliance to add specific automotive features like signal quality indicators and wake/sleep modes. Layer 2 Media Access Control (MAC) based switching allows a packet to go from any point on the switch devices, through physical interfaces at different Ethernet speeds, to any other point in a vehicle. The homogeneous nature of a ubiquitous Ethernet network is its advantage, and it is widely supported among industry users.
What sets Ethernet apart is that it works at relatively high speed over longer reaches while supporting enhancements to the standard, whereas PCIe works over shorter reaches at very high data rates with relatively simple overhead. However, we should not assume that one supplants the other; rather, both have solid footprints inside vehicles for all these reasons.
Figure 3: Zonal networks allow a homogeneous Ethernet network with PCIe-connected central compute at its heart
PCIe Enables Multiple Device Connectivity
While I have described two situations where SoCs may communicate with each other, the advent of Level 3 ADAS and above is really what is driving them together. With safety-critical systems often incorporating live camera data into their decision processes, not only is it essential to have raw, uncompressed live video data, but the additional latency introduced by packing it into an Ethernet or any other type of frame must also be avoided. In most cases camera data is interfaced directly into appropriate video interfaces on the SoC. There may be more than one SoC to share this workload of real-time acquisition, as well as co-processing of the video streams. Accelerators are also available to specifically support AI and ML workflows and datasets.
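A back-of-envelope calculation illustrates why uncompressed camera streams push past Ethernet backbone rates toward PCIe-class bandwidth. The resolution, bit depth, frame rate and camera count below are assumed values for illustration only.

```python
def raw_stream_mbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Bandwidth of an uncompressed video stream in Mb/s."""
    return width * height * bits_per_pixel * fps / 1e6

# One assumed 1080p camera at 16 bits per pixel and 30 frames per second:
per_camera = raw_stream_mbps(1920, 1080, 16, 30)   # ~995 Mb/s
# Six such cameras would already saturate several 1000 Mb/s backbones:
total = 6 * per_camera
print(f"{per_camera:.0f} Mb/s per camera, {total:.0f} Mb/s for six cameras")
```

A single raw stream of this (modest) format nearly fills a 1000 Mb/s Ethernet link on its own, before any sensor fusion traffic is added.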
Reducing the latency of data transfer inside this high-performance compute within the vehicle, where the same data set is often shared, is paramount and lends itself to being in the domain of PCIe. The device that allows this common sharing of data between multiple SoC root complexes and devices such as SSDs and network interface cards is a PCIe switch, also offering the NTB features described earlier. Simpler connectivity options might be available through fabrics, but it is the software and configuration definability that the PCIe switch offers that allows full use of system resources. Resource sharing is a key system feature that OEMs want to actively utilize.
A PCIe switch also offers distinct advantages when it comes to designing systems that are sometimes cost-optimized and always safety-critical.
A PCIe switch can allow modularized connection of SoCs, perhaps partially populated at the end of the production line and then upgraded at a dealer with additional modules. While all vehicles will support over-the-air updates and enhancements, the use of HPC also opens up a revenue stream to an OEM from subsequent owners of the same vehicle. For example, new revenue streams can be created beyond the first or second owner through the offer of significant performance or feature upgrades. The ‘Car As A Service’ is born.
The safety-critical element cannot be ignored. A PCIe switch also allows co-location of similar working units within the same vehicle, such that a failure in one is managed by a working system to prevent a hard fault.
How PCIe Provides a Software Model that Scales
The nature of PCIe working in all these systems is very similar. All SoCs utilize common transactions to move data from memory into the address space of a PCIe device somewhere on the switch. This scales between SoCs and between generations of PCIe, and has done so since the days of the original PCI.
On top of this, the PCIe switch can be configured for the maximum configuration even if only a subset of modules or resources is fitted at the end of the line. Examples might be fitting one SoC but allowing an upgrade to three, adding a single SSD but allowing an additional one later, and even allowing a failover PCIe switch to be added to the system later.
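The provision-for-maximum idea can be sketched as a simple model. The class, slot names and counts below are hypothetical illustrations, not a real switch configuration API.

```python
class SwitchProvision:
    """Illustrative model of a switch provisioned for the maximum
    platform configuration, then populated in stages over the
    vehicle's life (end of line, dealer upgrade, failover add-on)."""

    def __init__(self, max_socs: int = 3, max_ssds: int = 2):
        self.max = {"soc": max_socs, "ssd": max_ssds}
        self.fitted = {"soc": 0, "ssd": 0}

    def fit(self, kind: str, count: int = 1) -> int:
        """Populate slots; upgrades beyond the provisioned maximum fail."""
        if self.fitted[kind] + count > self.max[kind]:
            raise ValueError(f"only {self.max[kind]} {kind} slots provisioned")
        self.fitted[kind] += count
        return self.fitted[kind]

sw = SwitchProvision()
sw.fit("soc")       # end-of-line build: one SoC fitted
sw.fit("ssd")       # one SSD fitted, one slot held in reserve
sw.fit("soc", 2)    # later dealer upgrade to the full three SoCs
```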
From the PCIe switch side, the future evolution of the system can be streamlined. SoC software for PCIe support can remain static, with the OEM’s developer resources focused on the user features to be supported in the SoC module being fitted. The PCIe switch becomes the single element in the system that has to deal with performance and system upgrades, using features supported by the switch itself that do not need to be supported by the SoCs.
Examples of upgradeability might include the inclusion of a Virtual Root Complex, allowing the switch to recognize additional SoCs over the lifetime of a platform, allowing swapping of SoC options, and even securely authorizing an authenticated SoC’s access to the shared resources.
A second example is a popular data centre mechanism that allows multiple SoCs to access solid-state drives, with each SoC’s access being independent, protected and managed by a mechanism called SR-IOV (Single Root I/O Virtualisation). This is usually done with a complicated software stack on the SoC, but it can be orchestrated from a single driver inside the switch. This is another example where the complexity of software development is simplified and centralized, while remaining accessible through the standardized PCIe driver on the SoC.
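The sharing idea behind SR-IOV can be sketched as follows: the drive exposes one physical function plus several virtual functions (VFs), and each SoC is given its own subset of VFs. The function and names here are purely illustrative; real assignment is done by the switch-side driver and standard PCIe enumeration, not application code.

```python
def assign_vfs(num_vfs: int, socs: list[str]) -> dict[str, list[str]]:
    """Round-robin assignment of virtual functions to SoC root
    complexes; each SoC sees only the VFs mapped to it."""
    mapping = {soc: [] for soc in socs}
    for vf in range(num_vfs):
        mapping[socs[vf % len(socs)]].append(f"VF{vf}")
    return mapping

# Two SoCs sharing a drive that exposes four virtual functions:
print(assign_vfs(4, ["soc0", "soc1"]))
# -> {'soc0': ['VF0', 'VF2'], 'soc1': ['VF1', 'VF3']}
```

Each SoC’s standard storage driver then talks to its own VFs as if it owned the drive, while the single switch-side driver orchestrates the sharing.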
Different Ways to Expand PCIe Capacity
As we have already mentioned, SoCs can be installed as swappable units on a backplane containing the PCIe switch. The system can either be fully populated initially or populated through future expansions. PCIe connectivity can also be expanded by adding additional backplanes or chassis and interconnecting them. PCIe connectivity is usually achieved over backplane PCB traces and soldered-down connectors at short reaches. But what if one needed to add a second chassis? In many cases it is easier to make this addition as a complete second unit, rather than as an upgrade to an already fitted unit. The second chassis may enable next-generation features; think of a Level 4 upgrade to your factory-provisioned Level 2 vehicle. Or it may add a performance upgrade with added reliability (think worker/standby and failover).
It is easier to provision a separate space in the vehicle and run a wiring harness to it than it is to strip back devices to access an already fitted unit. Connectorized and cabled connectivity therefore becomes a requirement. And while we can always do this over multiple metres in the Ethernet domain, we would do so at the expense of data bandwidth. Doing this at high speed over a short cable could be the answer to this upgrade dream.
Microchip has been developing an automotive PCIe reference board fitted with industry-standard H-MTD connectors and using non-automotive data centre cable assemblies. This was done to evaluate the use of off-the-shelf components in enabling PCIe extended reach over cable, as well as to help our clients understand the demands of EMC compliance in these systems. In real-world testing, several metres of reach have been achieved using both Gen3 and Gen4 PCIe links.
FuSa in PCIe Systems with High-Performance SoCs
Many advanced SoCs in the class used for autonomous and semi-autonomous driving support higher levels of Functional Safety (FuSa), for instance ASIL D. ASIL D needs multiple cores in lockstep, which is cost-prohibitive for many of the SoCs advanced enough in performance to deliver the ADAS function. Functional decomposition and partitioning are therefore needed, using components at ASIL B, to achieve a system rating at ASIL C or above. PCIe offers many protections that make this system certification possible: in links at the physical and virtual container level, and through internal data paths. This inherent path protection, functional decomposition, partitioning and the ability to have standby or failover units in a system, as described previously, is what we can provide to a modern OEM in the safety-critical domain of Level 3 driver assistance, and it is something every manufacturer wants to see.
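The decomposition rules of ISO 26262-9 can be written as a small lookup: an ASIL requirement may be met by redundant elements developed at lower ASILs. This is a sketch of the two-way decomposition schemes only; the standard additionally records the original ASIL in brackets, e.g. ASIL B(D).

```python
# Two-way ASIL decompositions permitted by ISO 26262-9 (unordered pairs).
DECOMPOSITIONS = {
    "D": {("C", "A"), ("B", "B"), ("D", "QM")},
    "C": {("B", "A"), ("C", "QM")},
    "B": {("A", "A"), ("B", "QM")},
}

def valid_decomposition(target: str, a: str, b: str) -> bool:
    """Check whether two redundant elements at ASILs a and b can
    jointly satisfy a safety requirement at the target ASIL."""
    pairs = DECOMPOSITIONS.get(target, set())
    return (a, b) in pairs or (b, a) in pairs

# Two redundant ASIL B elements can satisfy an ASIL D requirement,
# which is the pattern described above for high-performance SoCs:
print(valid_decomposition("D", "B", "B"))
```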
When It All Comes Together - Conclusion
While the end goal might be perceived to be Level 4 autonomous driving, an OEM wants to adopt a platform approach allowing features and services to be added from models starting with Level 2 or Level 3 ADAS. This forces a mindset of scalability and expansion rather than pre-provisioning for the future’s most demanding workloads. Offering SoC scalability, extended resource sharing over time and the addition of further chassis containing PCIe switches allows a base platform to be common between models and platforms over many model years. The consumer will use a ‘Car As A Service’ based on the software-defined vehicle utilising the data centre on wheels. All of this is achievable today.