Rivian Showcases New Technology at Autonomy and AI Day Presentation
Early in December, Rivian held its inaugural technology-focused event, where media got a peek at what's to come.
At the event, Rivian highlighted several innovations consumers can look forward to: its third-generation autonomy hardware, a multi-modal sensor suite, new vehicle software, and an expansion into LiDAR.
"At launch, our third generation autonomy hardware will have a leading combination of vehicle sensors and inference available in North America."
Electrical Hardware
The following paragraphs are taken directly from Rivian's press release:
Rivian’s third generation compute module, Gen 3 Autonomy Computer, achieves 1600 sparse INT8 TOPS (trillion operations per second). To illustrate the sheer speed, Gen 3 Autonomy Computer can process 5 billion pixels per second.
At the heart of Gen 3 Autonomy Computer is the Rivian Autonomy Processor (RAP1), our proprietary, purpose-built silicon. RAP1 is among the first multi-chip modules used in high compute applications within the automotive industry.
Rivian’s in-house neural net engine (NNE) runs directly on the chip.
RAP1 is optimized for interfacing with many different sensor modalities to do inference processing with deterministic latencies—an important requirement for Physical AI.
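To put those two headline figures in perspective, here is a quick back-of-envelope calculation (my own arithmetic, not Rivian's) of the compute budget the chip could, in principle, devote to each incoming pixel:

```python
# Back-of-envelope check of the Gen 3 Autonomy Computer figures quoted above.
# The 1,600 sparse INT8 TOPS and 5 billion pixels/second come from Rivian's
# press release; the division below is purely illustrative.

TOPS = 1_600                      # sparse INT8 tera-operations per second
ops_per_second = TOPS * 1e12      # 1.6e15 operations per second
pixels_per_second = 5e9           # claimed pixel-processing throughput

# Implied compute available per pixel if every operation went to vision
# (in practice the chip also services other sensors and planning workloads).
ops_per_pixel = ops_per_second / pixels_per_second
print(f"{ops_per_pixel:,.0f} INT8 ops per pixel")  # 320,000
```

In other words, the headline specs imply roughly 320,000 INT8 operations of headroom per pixel, which gives a feel for why Rivian emphasizes inference with deterministic latencies.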

This is a big deal for Rivian, and really for the auto industry as a whole, because automakers typically lack either the ability or the appetite to spend the capital required to engineer their own custom computer chips. Tesla also uses custom silicon, but it is essentially the only other Western automaker to have made this commitment.
Deciding to Use LiDAR
Other automakers, most notably Tesla, have hesitated to adopt LiDAR because of its high cost, but Rivian is moving forward with integrating the technology into its fleet.
Building on this foundation, we will integrate LiDAR into our fleet, starting with future R2 models. LiDAR provides detailed, three-dimensional spatial data and redundant sensing—this helps make our R2 fleet a very large ground truth fleet for training our model.
Tesla famously removed radar and ultrasonic sensors from its vehicles, relying solely on Tesla Vision (an eight-camera system). That system has proven imperfect, which is understandable. The real debate is how much more effective autonomous driving becomes with LiDAR in addition to cameras.
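To make the "redundant sensing" point concrete, here is a minimal sketch of the classic way a second, independent depth measurement improves on cameras alone, using inverse-variance weighting. The numbers and function are hypothetical, not Rivian's actual stack:

```python
# A minimal sketch of why redundant sensing helps: fuse a noisy camera depth
# estimate with a LiDAR range measurement using inverse-variance weighting.
# All values and names here are illustrative assumptions.

def fuse_depth(camera_m: float, camera_var: float,
               lidar_m: float, lidar_var: float) -> tuple[float, float]:
    """Return the fused depth estimate and its variance.

    Each sensor's contribution is weighted by how trustworthy it is
    (lower variance = higher weight), so the fused estimate is never
    worse than the better of the two inputs.
    """
    w_cam = 1.0 / camera_var
    w_lid = 1.0 / lidar_var
    fused = (w_cam * camera_m + w_lid * lidar_m) / (w_cam + w_lid)
    fused_var = 1.0 / (w_cam + w_lid)
    return fused, fused_var

# Cameras infer depth and degrade at range; LiDAR measures range directly.
depth, var = fuse_depth(camera_m=48.0, camera_var=4.0,
                        lidar_m=50.2, lidar_var=0.04)
print(f"fused depth: {depth:.1f} m (variance {var:.3f})")
```

Because each sensor is weighted by its reliability, the fused estimate tracks the more trustworthy measurement, and a wildly disagreeing sensor can be flagged as a fault rather than silently trusted.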

Autonomy Advancements
According to Rivian:
Rivian Autonomy Platform begins with a self-improving data flywheel that’s designed to learn about human driving—especially the unpredictable nuances. Pre-tagged instances are transmitted to the Rivian Cloud, where the high-fidelity data sets are automatically organized and labeled. With near real-time capability, it is a pipeline that operates with extreme speed fleet-wide, every minute of every day, and builds an ever-expanding knowledge base.
This knowledge base fuels the Large Driving Model (LDM), Rivian’s foundational self-driving model. Its LLM-like architecture enables us to apply advances in generative AI directly to our autonomy stack, including reinforcement learning which distills superior driving strategies to the on-board models with minimal compute overhead.
LDM is improving capabilities on Gen 2 vehicles now and will see even more advancement in future vehicles with Gen 3 Autonomy Computer.
With this system in place, the whole stack is always improving with every release, and we have a feature roadmap that stretches to the highest levels of autonomy.
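Rivian doesn't publish implementation details of the flywheel, but the pattern it describes, pre-tagged events flowing to the cloud and getting automatically organized by scenario, looks roughly like the sketch below. All class names and fields are my assumptions:

```python
# A minimal sketch of the "data flywheel" pattern described above: the vehicle
# pre-tags interesting driving moments and ships them off, where they are
# filed by scenario for labeling. This is illustrative, not Rivian's pipeline.

from dataclasses import dataclass, field
import time

@dataclass
class DrivingEvent:
    scenario: str                 # e.g. "unprotected_left", "cut_in"
    sensor_clip: bytes            # bundled camera/radar/LiDAR snippet
    timestamp: float = field(default_factory=time.time)

class FlywheelQueue:
    """Collects pre-tagged events and groups them by scenario,
    standing in for the cloud-side auto-organization step."""
    def __init__(self) -> None:
        self.by_scenario: dict[str, list[DrivingEvent]] = {}

    def ingest(self, event: DrivingEvent) -> None:
        self.by_scenario.setdefault(event.scenario, []).append(event)

queue = FlywheelQueue()
queue.ingest(DrivingEvent("cut_in", sensor_clip=b"...clip bytes..."))
queue.ingest(DrivingEvent("unprotected_left", sensor_clip=b"...clip bytes..."))
print({k: len(v) for k, v in queue.by_scenario.items()})
```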
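The line about distilling "superior driving strategies to the on-board models" points at knowledge distillation, a standard technique for compressing a large model's behavior into a smaller one. Here is a hedged PyTorch-style sketch of the general idea; the model sizes, action space, and loss weighting are all my assumptions, not Rivian's:

```python
# A sketch of knowledge distillation, the general technique the release
# alludes to when it says strategies are "distilled" into on-board models.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_FEATURES, N_ACTIONS = 256, 16   # hypothetical scene encoding / action bins

teacher = nn.Sequential(          # stands in for the large cloud-side LDM
    nn.Linear(N_FEATURES, 1024), nn.ReLU(), nn.Linear(1024, N_ACTIONS))
student = nn.Sequential(          # stands in for the compact on-board model
    nn.Linear(N_FEATURES, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0                 # softens the teacher's action distribution

for step in range(100):                       # stand-in for a real data loader
    scenes = torch.randn(32, N_FEATURES)      # fake batch of encoded scenes
    with torch.no_grad():
        teacher_logits = teacher(scenes)
    student_logits = student(scenes)
    # KL divergence pulls the student's action distribution toward the
    # teacher's; only the small student ever runs in the vehicle.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean") * temperature**2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Only the compact student needs to run on the vehicle, which is how a cloud-scale model's behavior could reach the car with minimal compute overhead.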
This is interesting because Rivian is effectively saying, "hey, we're using the same kind of technology that ChatGPT uses." It's the first time I've seen an automaker directly frame its in-vehicle software as something resembling ChatGPT or Grok. It's a smart move because it sounds good to investors, even if the underlying technology is somewhat different.
