The Data Center on Wheels

Author:
Amir Bar-Niv, Vice President of Marketing, Automotive, Marvell

Date:
02/20/2025

Cars contain 3-4x more chips than they did a decade ago, and the value of those chips in higher-end models has grown from around $350 to $2,000.


Figure 1: Communication breakdown: in a domain architecture, communication between the rear right camera (red) and rear left camera requires a trip to the front of the car, and a connection to the right front brake (yellow) is impossible. In a zonal architecture, it is a direct connection.

EVs can contain 3x or more chips than internal combustion engine (ICE) vehicles, depending on the complexity of the vehicle. The growth of the silicon footprint in cars has paved the way for lane change sensors, infotainment consoles, downloadable apps and more: electronics have fundamentally changed the driving experience for the better.

This success, however, is also causing growing pains. Ever larger amounts of data must shuttle at ever faster rates between brakes, cameras, and other devices. Security updates are not only urgent; they also have to be delivered seamlessly. Autonomous cars? Futurists say these top-of-the-line vehicles could generate several gigabytes to several terabytes of data per day.

To solve the problem, car makers are taking their cue from the cloud and adopting data center design principles for their cars: they’re investing in more powerful (and sometimes custom) processors, increasing on-board storage and organizing all of these elements in a holistic manner to reduce cost and power consumption. Just as important, they have to fit these data center capabilities into the vehicle without the benefit of the server racks, external cooling or on-site technicians you’d have in a data center. Cars have to be vehicles first.

Going Zonal

Like data centers, a better computing infrastructure starts with a better network. Think of the network as the blueprint that determines the organization and interoperability of all of the components. Fifteen years ago, the car network was really a tangle of wires: it consisted of point-to-point connections between sensors and warning lights. Manufacturers would brag about having 70 or 100 processors in one of their cars so you can imagine the complexity.

Then came the rise of domain architectures where sensors and microcontrollers were grouped into networks by function. The infotainment domain, for instance, might have included the radio, driver’s console and navigation system while the body domain would have included the brakes and lights, and the ADAS domain included cameras and sensors.

While domain architectures were an improvement, they didn’t scale well. Each domain had its own processor or controller that was connected directly to all the agents of that domain around the car. This required many cables for each domain, all running in parallel, which created a complicated, costly cable harness.

The cable harness is typically the third heaviest component in a car; stretched end-to-end, some can measure over 1.5 kilometers. Cable harnesses also typically need to be installed by hand, adding production costs: robots aren’t sensitive enough for this work, and the voids where cables are routed change from model to model.

Enter zonal architectures. In zonal architectures, all of the devices in a particular physical zone of a vehicle are linked to a local zonal aggregation switch with short cables. The right front zonal switch, for example, would control all of the devices in its zone: the right front brake, the right headlight and tire sensors, etc., even if these devices live in separate domains. Zonal aggregation switches—and there will be four to six zones in cars by the second half of the decade—then link to central switches that coordinate traffic flow between the zones.  

The advantages are numerous. First, as the picture shows, the amount of cable needed to connect everything is drastically reduced by eliminating point-to-point connections. Fewer cables mean less weight, which means better mileage in ICEs or longer range in EVs. Second, and arguably most important, communication is typically more rapid, a key consideration for the real-time computing problems drivers face. In domain networks, linking one device to another on a separate domain could take several more hops, if the connection could be completed at all.
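The cable savings can be sketched with back-of-the-envelope numbers. Everything below (device count, zone count, cable lengths) is an assumed illustration of the wiring difference, not Marvell data:

```python
# Illustrative sketch with hypothetical figures: in a domain architecture
# each device runs its own long cable to a front-mounted domain controller;
# in a zonal architecture each device makes a short hop to its zone's
# switch, and only the switch uplinks run the length of the car.

n_devices = 40                   # assumed sensor/actuator count
n_zones = 5                      # "four to six zones" per the article
avg_run_to_controller_m = 3.0    # assumed average domain-cable run
avg_run_to_zonal_m = 0.5         # assumed short in-zone hop
uplink_m = 3.0                   # zonal switch to central switch

domain_cable = n_devices * avg_run_to_controller_m
zonal_cable = n_devices * avg_run_to_zonal_m + n_zones * uplink_m

print(f"domain wiring: {domain_cable:.0f} m")    # 120 m
print(f"zonal wiring:  {zonal_cable:.0f} m")     # 35 m
print(f"reduction:     {1 - zonal_cable / domain_cable:.0%}")
```

Even with these rough assumptions, the long parallel runs are what dominate; replacing them with short in-zone hops is where the weight comes out.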

Third, it provides a way to use computing resources more effectively. With zonal architectures, manufacturers can replace dozens of fixed-function microcontrollers with powerful, centralized, programmable CPUs or AI accelerators that can perform complex tasks that would have been impractical or impossible on controllers. Storage can be centralized as well. Instead of megabyte-sized pods planted across domains, several GBs can be located near the CPUs or AI accelerators to serve up the data needed for real-time diagnostics or automated parking. And, if the history of computing is any guide, cutting-edge applications in luxury models will trickle down across the line as chips and other devices get more powerful and efficient over time.

Technology Transfer

The default choice for zonal networking is Ethernet, the standard for IT networking for over 50 years. The number of Ethernet ports shipped annually to the automotive industry will likely pass 1 billion over the next few years, more than double today’s rate and 10x the number shipped in 2018; toward the end of the decade, the number of Ethernet ports shipped to automotive may even pass the number shipped to data centers (Figure 2).


Figure 2: Auto Ethernet ports won’t be as fast, but annual shipments could soon reach over 1 billion. (Marvell Technology, Inc. and industry analyst estimates.)

 

Additionally, Ethernet will make scaling networks easier. Current domain switches are capable of transferring data at around 1-10 gigabits per second. After 2028, zonal switches will climb to up to 25 Gbps with central switches topping 90 Gbps and, if history is any indication, bandwidth will continue to double every three to four years.
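Taking the article’s figures at face value, that growth curve is easy to extrapolate. The doubling period and out-years below are assumptions for illustration, not a product roadmap:

```python
# Extrapolate switch bandwidth from the article's figures: zonal switches
# around 25 Gbps after 2028, doubling roughly every 3.5 years (assumed
# midpoint of the "three to four years" cadence).

def projected_gbps(start_gbps, start_year, year, doubling_years=3.5):
    """Bandwidth at `year`, assuming a fixed doubling period."""
    return start_gbps * 2 ** ((year - start_year) / doubling_years)

for year in (2028, 2032, 2035):
    print(f"{year}: ~{projected_gbps(25, 2028, year):.0f} Gbps zonal")
```

At that cadence, a 25 Gbps zonal switch in 2028 implies roughly 100 Gbps by the mid-2030s.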

Why would anyone need a 90 Gbps switch in a car? Cameras, for one thing. A dash cam recording at 1080p resolution will effectively generate 6 GB per hour. That means driving an hour and a half a day generates about 63 GB per week. Now multiply that by the number of cameras cars may have in the future. And then remember that a good portion of this data will have to be delivered and analyzed in real time: safety systems will only have a brief period of time to determine whether it’s a cow or a strange shadow in the road. Traditional cameras will also be complemented by LiDAR and thermal imagers to supplement a car’s awareness and improve detection of obstacles in situations where lighting or traditional visibility is suboptimal. Any way you examine the problem, the amount of data will be vast, and so will the computing power needed to analyze it. Camera bridges, which can streamline the process of managing the large volumes of real-time data generated by cameras, will play a pivotal role in ensuring real-time processing systems like this are both functional and accurate. Early versions of these devices have been developed and may begin to appear in cars in the second half of the decade.
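The dash-cam arithmetic works out as follows; the per-hour figure comes from the text, while the multi-camera counts are hypothetical:

```python
# Worked version of the dash-cam data estimate: ~6 GB per hour at 1080p,
# 1.5 hours of driving a day, 7 days a week.

GB_PER_HOUR_1080P = 6      # figure from the text
hours_per_day = 1.5
days_per_week = 7

per_camera_weekly = GB_PER_HOUR_1080P * hours_per_day * days_per_week
print(f"one camera:  {per_camera_weekly:.0f} GB/week")          # 63 GB/week

for cameras in (4, 8, 12):  # assumed camera counts for future vehicles
    print(f"{cameras:>2} cameras: {per_camera_weekly * cameras:.0f} GB/week")
```

A dozen cameras at this duty cycle would already approach a terabyte per week, before adding LiDAR or thermal imaging streams.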

Car Ethernet networks, of course, won’t be identical to what you find in data centers. In some ways, they will be more sophisticated. Most automotive processors and Ethernet devices, for instance, contain redundant, parallel cores: if one fails, the other takes over. This is the basic concept of functional safety in automotive. Data centers don’t need this kind of resiliency. Ethernet advances for cars, in fact, will likely percolate to other markets such as robotics, medical and industrial IoT, where networking is likewise evolving.

We don’t know exactly where the road will go, but we have a pretty good sense. And by leveraging the technology and lessons learned from data centers, we can get there faster.

 
