Best of NVIDIA GPU Technology Conference Drive Tech News

At the NVIDIA GPU Technology Conference (GTC) there were several announcements from Toyota Research Institute, Velodyne, ON Semiconductor, Cepton, SoundHound and Mellanox. NVIDIA also introduced Safety Force Field.

Toyota Research Institute-Advanced Development (TRI-AD) and NVIDIA announced a new collaboration to develop, train and validate self-driving vehicles.

The partnership builds on an ongoing relationship with Toyota to utilize the NVIDIA DRIVE AGX Xavier AV computer and is based on close development between teams from NVIDIA, TRI-AD in Japan and Toyota Research Institute (TRI) in the United States. The broad partnership includes advancements in:

  • AI computing infrastructure using NVIDIA GPUs
  • Simulation using the NVIDIA DRIVE Constellation™ platform
  • In-car AV computers based on DRIVE AGX Xavier or DRIVE AGX Pegasus™

The agreement includes developing an architecture that can scale across many vehicle models and types, accelerating the development and production timeline, and simulating the equivalent of billions of miles of driving in challenging scenarios.

Simulation has proven to be a valuable tool for testing and validating AV hardware and software before it is put on the road. As part of the collaboration, TRI-AD and TRI are utilizing the NVIDIA DRIVE Constellation platform for components of their simulation workflow.

DRIVE Constellation is a data center solution, comprising two side-by-side servers. The first server — Constellation Simulator — uses NVIDIA GPUs running DRIVE Sim™ software to generate the sensor output from a virtual car driving in a realistic virtual world. The second server — Constellation Vehicle — contains the DRIVE AGX car computer, which processes the simulated sensor data. The driving decisions from Constellation Vehicle are fed back into Constellation Simulator, aiming to realize bit-accurate, timing-accurate hardware-in-the-loop testing.
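To make the closed loop concrete, here is a minimal Python sketch of the feedback pattern Constellation implements in hardware: one component renders sensor data, another turns it into driving commands, and those commands drive the next simulation step. All names and numbers here are hypothetical illustrations, not the DRIVE Constellation API.

```python
# Toy closed-loop hardware-in-the-loop pattern: simulator -> vehicle
# stack -> back into the simulator. Purely illustrative.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    timestamp: float          # simulation time, seconds
    obstacle_distance: float  # meters to nearest obstacle ahead

@dataclass
class Control:
    throttle: float  # 0..1
    brake: float     # 0..1

def simulate_step(t: float, ego_speed: float) -> SensorFrame:
    """Stand-in for the Constellation Simulator side: render one frame."""
    return SensorFrame(timestamp=t,
                       obstacle_distance=max(0.0, 100.0 - ego_speed * t))

def vehicle_stack(frame: SensorFrame) -> Control:
    """Stand-in for the Constellation Vehicle (DRIVE AGX) side."""
    if frame.obstacle_distance < 30.0:
        return Control(throttle=0.0, brake=1.0)
    return Control(throttle=0.5, brake=0.0)

# Closed loop: decisions from the "vehicle" side feed the next sim step.
speed, dt = 20.0, 0.1
for step in range(50):
    frame = simulate_step(step * dt, speed)
    cmd = vehicle_stack(frame)
    speed = max(0.0, speed + (cmd.throttle * 2.0 - cmd.brake * 8.0) * dt)
```

The real system does this with bit-accurate sensor streams and the actual in-car computer in the loop; the sketch only shows the shape of the feedback cycle.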

This end-to-end simulation toolchain will help Toyota, TRI-AD and TRI bring automated vehicles to market.

Jensen Huang announced Safety Force Field — a driving policy designed to shield self-driving cars from collisions, a sort of “cocoon” of safety.

Safety Force Field™ (SFF™) is a robust driving policy that protects the vehicle, its occupants and other road users.

SFF analyzes and predicts the dynamics of the surrounding environment by taking in sensor data and determining a set of actions to protect the vehicle and other road users. The SFF framework ensures these actions will never create, escalate or contribute to an unsafe situation and includes actions necessary to mitigate potential danger.

Backed by robust calculations, SFF makes it possible for vehicles to achieve safety based on mathematical zero-collision verification, rather than attempting to model the high complexity of real-world scenarios with limited statistics. Running on the NVIDIA DRIVE platform, SFF performs frame-by-frame, physics-based computations on vehicle sensor data.

SFF has also undergone validation using real-world data and bit-accurate simulation, including scenarios involving highway and urban driving that would be too dangerous to recreate in the real world.

SFF takes into account both braking and steering constraints. Considering the two together helps eliminate several problematic vehicle behavior anomalies that could arise if they were treated separately. The policy follows one core principle, collision avoidance, as opposed to a large set of rules and exceptions, as the sketch below illustrates.
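As a rough illustration of that principle, the following one-dimensional Python sketch "claims" the stretch of road each actor's maximum-braking stopping procedure would occupy, and triggers the safety action only when claimed sets intersect. It is a simplified toy under invented parameters, not NVIDIA's published SFF formulation, and it ignores steering.

```python
# 1-D toy of the claimed-set idea: each actor claims the road interval
# its full-braking stop would sweep; overlapping claims demand action.
def stopping_distance(speed: float, max_decel: float) -> float:
    """Distance covered while braking from `speed` (m/s) at `max_decel` (m/s^2)."""
    return speed * speed / (2.0 * max_decel)

def claimed_interval(position: float, speed: float, max_decel: float):
    """Road interval an actor occupies if it brakes fully right now.
    Positive speed means travelling toward larger positions."""
    if speed >= 0:
        return (position, position + stopping_distance(speed, max_decel))
    return (position - stopping_distance(-speed, max_decel), position)

def intervals_overlap(a, b) -> bool:
    return a[0] <= b[1] and b[0] <= a[1]

# Ego at 0 m doing 25 m/s; lead vehicle 60 m ahead doing 5 m/s.
ego = claimed_interval(0.0, 25.0, max_decel=8.0)   # ~(0, 39.1)
lead = claimed_interval(60.0, 5.0, max_decel=8.0)  # ~(60, 61.6)
must_brake = intervals_overlap(ego, lead)          # False: safe for now
```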

SFF is an open platform and can be combined with any driving software. As a safety decision-making policy in the motion planning stack, SFF monitors and prevents unsafe actions. It cleanly separates obstacle avoidance from a long tail of complicated rules of the road. When running on a high-performance compute platform like NVIDIA DRIVE, it adds another layer of diversity and redundancy to deliver the highest levels of safety.

Velodyne Surround View for NVIDIA DRIVE

Velodyne Lidar, Inc. announced that its surround-view lidar solutions for collecting rich perception data in testing and validation are available on the NVIDIA DRIVE™ autonomous driving platform, allowing full 360-degree perception in real time and facilitating highly accurate localization and path-planning capabilities.

Velodyne sensor characteristics are also available on NVIDIA DRIVE Constellation™, an open, scalable simulation platform that enables large-scale, bit-accurate hardware-in-the-loop testing of AVs. The platform’s DRIVE Sim™ software simulates lidar and other sensors, recreating a self-driving car’s inputs with high fidelity in the virtual world.

Velodyne provides the industry’s broadest portfolio of lidar solutions, spanning the full product range required for advanced driver assistance and autonomy by automotive OEMs, truck OEMs, delivery vehicle manufacturers, and Tier 1 suppliers. Proven through learning from millions of road miles, Velodyne sensors help determine the safest way to navigate and direct a self-driving vehicle. The addition of Velodyne sensors enhances Level 2+ advanced driver assistance system (ADAS) features including Automatic Emergency Braking (AEB), Adaptive Cruise Control (ACC), and Lane Keep Assist (LKA).
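For a flavor of how 360-degree lidar returns feed a function like AEB, here is a toy Python check that finds the closest return inside a forward corridor and compares it against the stopping distance. The geometry and thresholds are invented for illustration; this is not Velodyne's or NVIDIA's actual logic.

```python
import numpy as np

# Toy AEB-style check on a lidar point cloud (x forward, y left, z up).
def min_forward_range(points: np.ndarray, corridor_half_width: float = 1.5) -> float:
    """Closest return inside a straight-ahead corridor, in meters."""
    ahead = points[(points[:, 0] > 0) & (np.abs(points[:, 1]) < corridor_half_width)]
    return float(np.min(np.linalg.norm(ahead[:, :2], axis=1))) if len(ahead) else np.inf

def aeb_should_fire(points: np.ndarray, speed: float,
                    max_decel: float = 6.0, margin: float = 2.0) -> bool:
    """Fire AEB if the nearest forward obstacle is inside stopping distance."""
    stopping = speed * speed / (2.0 * max_decel)
    return min_forward_range(points) < stopping + margin
```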

SoundHound

SoundHound Inc., the leading innovator in voice-enabled AI and conversational intelligence technologies, today unveiled its large-vocabulary, hybrid voice and natural language understanding interface for in-vehicle infotainment systems at the NVIDIA GPU Technology Conference (GTC) 2019. The event marks the first time the technology has been shown to the public and highlights the NVIDIA DRIVE™ ecosystem collaboration between SoundHound Inc. and NVIDIA.

Leveraging the patented Speech-to-Meaning and Deep Meaning Understanding technologies from SoundHound Inc.’s Houndify Voice AI platform, running on NVIDIA DRIVE IX™, the solution enables real-time responses to voice queries in vehicles, even without internet connectivity. This is achieved with high speed and accuracy through a hybrid speech recognition system that processes voice requests both in the cloud and locally on the embedded system, returning fast responses. When a connection to the cloud is unavailable, the embedded system still lets drivers control the car’s functions, including climate control, window controls, radio, navigation, and more.
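The hybrid pattern described above can be sketched as a simple cloud-first, embedded-fallback flow. The Python below is a hypothetical illustration with placeholder functions; the real Houndify SDK and its API differ.

```python
# Hypothetical cloud-first recognition with an on-device fallback.
def recognize_cloud(audio: bytes) -> str:
    """Placeholder for a cloud speech/NLU call."""
    raise ConnectionError("no network")

def recognize_embedded(audio: bytes) -> str:
    """Placeholder for a local on-device model."""
    return "set temperature to 21 degrees"

def recognize(audio: bytes) -> str:
    try:
        return recognize_cloud(audio)
    except ConnectionError:
        # Offline path: the local model still covers car functions such
        # as climate, windows, radio and navigation.
        return recognize_embedded(audio)

print(recognize(b"..."))  # falls back to the embedded recognizer
```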

With Houndify, drivers can now interact with hundreds of domains—programs that provide users with relevant information or actions related to their queries. These include: navigation, weather, stock prices, sports scores, flight status, local business searches, and hotel searches with complex criteria, among others.

SoundHound Inc.’s Houndify technology is already being utilized by leading manufacturers including Mercedes-Benz, Groupe PSA, Hyundai, Honda, and others.


ON Semi Sensor Modeling Provides Real-Time Data for NVIDIA DRIVE

ON Semiconductor announced that it is leveraging its sophisticated image sensor modeling technology to provide real-time data to the NVIDIA DRIVE Constellation™ simulation platform. The open, cloud-based platform performs bit-accurate simulation for large-scale, hardware-in-the-loop testing and validation of autonomous vehicles.

ON Semiconductor’s image sensor model receives both scene information and control signals from DRIVE Constellation, calculates a real-time image based on those inputs, and transmits the simulated image back to DRIVE Constellation for processing. The sensor model utilizes all critical parameters in the path from photons to digital output (e.g., quantum efficiency, noise, gain, analog-to-digital conversion, black-level correction and more) to accurately reproduce the output of a real-world image sensor.
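The stages listed above map naturally onto a photons-to-digital-numbers pipeline. The Python sketch below walks through them with made-up parameter values; it illustrates the general structure of such a model, not ON Semiconductor's actual implementation.

```python
import numpy as np

# Illustrative photon-to-digital pipeline: quantum efficiency and shot
# noise, read noise, analog gain, black-level offset, then ADC
# quantization. All parameter values are invented for the sketch.
rng = np.random.default_rng(0)

def sensor_model(photons: np.ndarray,
                 qe: float = 0.6,            # quantum efficiency
                 read_noise_e: float = 2.0,  # read noise, electrons RMS
                 gain: float = 0.25,         # digital numbers per electron
                 black_level: int = 64,      # DN offset added before output
                 bits: int = 10) -> np.ndarray:
    electrons = rng.poisson(photons * qe)                        # QE + shot noise
    electrons = electrons + rng.normal(0, read_noise_e, photons.shape)
    dn = electrons * gain + black_level                          # gain + black level
    return np.clip(np.round(dn), 0, 2**bits - 1).astype(np.uint16)  # ADC

frame = sensor_model(rng.uniform(0, 2000, size=(4, 4)))  # tiny test image
```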

Cepton Tech 3D Lidar

Cepton Technologies, Inc., a provider of 3D LiDAR solutions for automotive, IoT, industrial, security, retail and mapping applications, today announced the Vista-Edge LiDAR Evaluation Kit, an edge processing system combining Cepton’s Vista™ LiDAR sensor and the NVIDIA Jetson TX2 supercomputer on a module.

Vista-Edge is a true plug-and-play device with all necessary software and tools pre-installed to view and analyze the LiDAR’s 3D point-cloud data. The pre-installed SDK and sample code accelerate customers’ development of their own product-specific software that utilizes Cepton’s sensors. Customers who buy the LiDAR Evaluation Kit will automatically be eligible to upgrade to the Perception Evaluation Kit when it becomes available later in 2019.

Vista-Edge features a compact, lightweight design; offers 1 Gb Ethernet, HDMI, USB 3.0 and USB 2.0 ports; and can function on IoT and Wi-Fi networks. Out of the box, the system takes only a few minutes to set up before customers can view the LiDAR’s data.
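Since the kit streams data over its Ethernet port, a first experiment often amounts to reading point packets off the wire. The Python sketch below assumes a hypothetical UDP port and packet layout purely for illustration; Cepton's pre-installed SDK handles the real protocol and parsing.

```python
import socket
import struct

# Hypothetical packet layout: a flat run of little-endian float triples.
POINT = struct.Struct("<fff")  # assumed x, y, z in meters

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 8808))   # placeholder port, not Cepton's actual one

data, _ = sock.recvfrom(65535)  # blocks until one packet arrives
points = [POINT.unpack_from(data, off)
          for off in range(0, len(data) - POINT.size + 1, POINT.size)]
print(f"received {len(points)} points; first: {points[0] if points else None}")
```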

Mellanox — whose interconnect technology helps power more than half the world’s 500 fastest supercomputers — complements NVIDIA’s strength in data centers and HPC, Jensen Huang said, explaining why NVIDIA agreed to buy the company earlier this month.