Autonomous & Self-Driving Vehicle News: VW, AEye, Continental, Cepton, Aeva, Arbe Robotics & TIER IV

In autonomous and self-driving vehicle news are VW, AEye, Continental, Cepton, Aeva, Arbe Robotics and TIER IV.

VW IEV Level 5 Mobility

At Chantilly Arts & Elegance near Paris, the Volkswagen Group presented an innovative design study intended to redefine the long-distance mobility of the future. The all-electric Innovation Experience Vehicle (IEV), called Gen.Travel, is a real prototype that drives autonomously (Level 5) and gives a realistic outlook for the mobility of the coming decade. The car’s modular interior concept makes it a flexible, sustainable Mobility-as-a-Service alternative to short-haul flights. As a research vehicle, the Gen.Travel will be used to test the concept and new functionalities with customers. Based on the study results, individual features may later be transferred to series vehicles.

AEye Partners with Continental and NVIDIA DRIVE Sim

AEye, Inc. (NASDAQ: LIDR), a global leader in adaptive, high-performance lidar solutions, today announced that Continental’s HRL131 Long Range Lidar, based on AEye’s patented architecture, is now available for testing and development on the NVIDIA DRIVE Sim™ platform. NVIDIA announced that Continental and AEye will use the NVIDIA DRIVE Sim platform to enable AV and ADAS customers to rapidly simulate a fully adaptive lidar system across a variety of autonomous driving edge cases and environments. This saves OEMs time during development and testing, accelerating time-to-market for commercial deployment.

“The software-defined nature of the HRL131 means it is situationally aware, with the ability to adapt its scan pattern depending on the driving scenario to maximize safety,” said Jordan Greene, GM of Automotive at AEye. “It’s critical that manufacturers be able to test and validate these performance modes and the product’s performance in diverse situations, which NVIDIA DRIVE Sim will uniquely enable.”

Thanks to its adaptive design, Continental’s HRL131 Long Range Lidar can deploy multiple scan patterns, including different fields of view (FOV), for disparate driving conditions and use cases. By integrating with NVIDIA DRIVE Sim, developers gain the ability to simulate the performance of different scan patterns in order to define and optimize each, and further refine the performance of the HRL131 to match their specific requirements.

This is especially important for high-speed highway driving, where even a small object can have a big impact. NVIDIA DRIVE Sim allows the recreation of obstacles of many shapes and sizes, with accurate physically based rendering, in complex highway environments. Using a digital twin of the HRL131 across varied environments and operating conditions in NVIDIA DRIVE Sim, OEMs and AV developers can determine which performance modes best suit their application by solving for complex edge cases. Once identified and tuned, performance modes can be swapped on the fly using external cues such as speed, location or vehicle pitch.
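The on-the-fly mode switching described above can be sketched in a few lines. The preset names, FOV and range figures, and thresholds below are illustrative assumptions for the sake of the example, not Continental’s actual HRL131 configuration:

```python
from dataclasses import dataclass

# Hypothetical scan-pattern presets; names, FOV, range and thresholds
# are illustrative only, not the real HRL131 parameters.
@dataclass
class ScanMode:
    name: str
    horizontal_fov_deg: float
    max_range_m: float

HIGHWAY = ScanMode("highway", horizontal_fov_deg=30.0, max_range_m=300.0)
URBAN = ScanMode("urban", horizontal_fov_deg=120.0, max_range_m=150.0)

def select_mode(speed_mps: float, pitch_deg: float) -> ScanMode:
    """Pick a scan pattern from external cues (speed, vehicle pitch)."""
    # At highway speeds on level ground, trade FOV for range so small
    # obstacles are seen early.
    if speed_mps > 25.0 and abs(pitch_deg) < 5.0:
        return HIGHWAY
    # Otherwise prefer a wide field of view for dense, low-speed scenes.
    return URBAN
```

Simulation platforms like DRIVE Sim let developers exercise exactly this kind of switching logic against edge-case scenes before committing thresholds to hardware.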

“With the scalability and accuracy of NVIDIA DRIVE Sim, we’re able to validate our long-range lidar technology efficiently,” said Gunnar Juergens, Head of Product Line LiDAR at Continental. “It’s a robust tool for the industry to train, test and validate safe self-driving solutions.”

Continental developed the HRL131 in partnership with AEye, which licensed its 4Sight Intelligent Sensing Platform, reference architecture, and software to Continental as the basis for the product. The long-range lidar is well suited to passenger vehicle applications because it combines high dynamic spatial resolution with long-range detection: with the HRL131, vehicles can be detected at more than 300 meters, and pedestrians at more than 200 meters. Continental is integrating the long-range lidar technology into its full sensor stack solution to create the first full-stack automotive-grade system for Level 2+ up to Level 4 automated and autonomous driving applications.

Aeva LiDAR Supported by NVIDIA DRIVE Sim

Aeva® (NYSE: AEVA), a leader in next-generation sensing and perception systems, today announced its Aeries™ 4D LiDAR™ sensors are now supported on the NVIDIA DRIVE Sim platform. Aeva’s Frequency Modulated Continuous Wave (FMCW) 4D LiDAR sensors detect 3D position, in addition to instant velocity for each point at distances up to 500 meters, bringing an added dimension to sensing and perception for safe autonomous driving.
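The per-point instant velocity rests on standard FMCW physics: mixing up- and down-chirp returns against the transmitted signal yields two beat frequencies, from which range and radial velocity can both be solved. The sketch below implements the textbook triangular-FMCW relations only; it is not Aeva’s proprietary processing, and the sign convention (positive velocity = approaching target) is an assumption:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_velocity(f_beat_up, f_beat_down, chirp_slope_hz_per_s, wavelength_m):
    """Recover range and radial velocity from up/down-chirp beat frequencies.

    Textbook triangular-FMCW relations, with S the chirp slope (Hz/s):
        f_up   = 2*R*S/c - 2*v/lambda
        f_down = 2*R*S/c + 2*v/lambda
    Adding the two beats isolates range; subtracting isolates Doppler.
    """
    range_m = C * (f_beat_up + f_beat_down) / (4.0 * chirp_slope_hz_per_s)
    velocity_mps = wavelength_m * (f_beat_down - f_beat_up) / 4.0
    return range_m, velocity_mps
```

A time-of-flight sensor must infer velocity by differencing positions across frames; the FMCW formulation above recovers it from a single measurement, which is the "added dimension" the announcement refers to.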

NVIDIA DRIVE Sim is an end-to-end platform architected to run large-scale, physically based, multi-sensor, autonomous vehicle simulation to improve developer productivity and accelerate time to market. DRIVE Sim includes a unique and powerful ray-tracing application programming interface that allows developers to accurately model complex sensor technologies by giving them full control of ray firings and hit point returns. This enables modeling advanced physical phenomena to support new perception sensing technologies such as FMCW LiDAR, for any set of requirements, in a well-organized structure.

“With Aeva’s 4D LiDAR integrated, OEMs using the NVIDIA DRIVE Sim platform can accelerate the realization of safe autonomous driving for the next generation of autonomous vehicles,” said James Reuther, Vice President of Technology at Aeva. “Our unique ability to directly detect the instant velocity of each point, in addition to precise 3D position at long range, surpasses the sensing and perception capabilities of legacy sensors. This requires a platform capable of handling our unique LiDAR features, and DRIVE Sim enables just that.”

“DRIVE Sim was designed to accurately model advanced sensors such as 4D LiDARs,” said Zvi Greenstein, general manager of automotive at NVIDIA. “The physics-based and time-accurate simulation allows Aeva and its customers to train perception networks, accelerate development and validate software before deploying to production fleets.”

Developers using DRIVE Sim now have access to Aeva’s 4D LiDAR models to increase their simulation capabilities and streamline testing and development. Aeva sensors enable the next wave of driver assistance and autonomous vehicle capabilities using FMCW technology to deliver sensing and perception capabilities that are not possible with legacy time-of-flight 3D LiDAR sensors, including:

  • Instant Velocity Detection: Directly detect velocity for each point in addition to 3D position to perceive where things are and precisely how fast they are moving.
  • Ultra Long Range Performance: Detect and track dynamic objects such as oncoming vehicles and other moving objects at distances up to 500 meters.
  • Ultra Resolution™: A real-time camera-level image providing up to 20 times the resolution of legacy time-of-flight LiDAR sensors.
  • Road Hazard Detection: Detect small objects on the roadway with greater confidence at up to twice the distance of legacy time-of-flight LiDAR sensors.
  • Semantic Segmentation: Real-time segmentation between static and dynamic points enables the detection of roadway markings, drivable regions, vegetation and road barriers.
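As a rough illustration of the last point, per-point radial velocity lets a processor separate static from dynamic returns after compensating for ego motion. The sketch below assumes a simplified case (ego vehicle translating along +x with no rotation); the point format and threshold are illustrative, not Aeva’s actual pipeline:

```python
import math

def split_static_dynamic(points, ego_speed_mps, threshold_mps=0.5):
    """Split 4D LiDAR returns into static and dynamic points.

    Each point is (x, y, z, radial_velocity) in the sensor frame. For a
    stationary target, the measured radial velocity (range rate) is just
    the ego motion projected onto the ray, so after compensation it
    should be near zero. Assumes ego motion along +x with no rotation.
    """
    static, dynamic = [], []
    for x, y, z, v_radial in points:
        r = math.sqrt(x * x + y * y + z * z) or 1.0
        # A static point ahead closes at -ego_speed along the ray.
        v_expected = -ego_speed_mps * (x / r)
        if abs(v_radial - v_expected) < threshold_mps:
            static.append((x, y, z, v_radial))
        else:
            dynamic.append((x, y, z, v_radial))
    return static, dynamic
```

Because the split falls out of a single frame's measurements, no cross-frame tracking is needed to tell a parked car from a moving one.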

RoboSense LiDARs on XPENG G9

RoboSense, a world-leading provider of Smart LiDAR Sensor Systems, announced that the XPENG G9 is equipped with two RoboSense second-generation smart solid-state LiDARs (RS-LiDAR-M1). On September 21, XPENG Motors held the “G9 Flagship SUV Immersive Experience” online launch event and officially launched the XPENG G9, positioned as “the world’s fastest-charging electric SUV”. The G9 marks a breakthrough, moving from the development of “independent functions” to “global intelligence”.

Cepton Partners with NVIDIA

Cepton, Inc. (“Cepton” or the “Company”) (Nasdaq: CPTN), a Silicon Valley innovator and leader in high-performance lidar solutions, announced that it is collaborating with NVIDIA to add Cepton lidar models into NVIDIA DRIVE Sim™.

As a member of the NVIDIA DRIVE Sim ecosystem, Cepton is adding an accurate digital twin of its lidar technology to the platform. These models will help automakers and autonomous vehicle developers accelerate lidar deployment while minimizing real-world test driving.

The Cepton lidar extension offers high-fidelity simulation and visualization of Cepton’s state-of-the-art lidar sensors. Cepton is the first lidar partner to provide dual near- and long-range simulation, demonstrated on NVIDIA’s DRIVE Sim platform. This two-fold detection couples blind-spot elimination (near-range Nova lidar) with obstacle detection at highway speeds (long-range Vista-X series) across a variety of true-to-life driving scenarios.

By working with NVIDIA, Cepton aims to expedite the development of lidar-based assisted and autonomous driving functionalities for global customers. DRIVE Sim enables users to easily work with Cepton’s lidar models in a virtual setting to prototype a range of sensor integration options, visualize lidar scan patterns in different environments and optimize their sensor configurations with tunable settings such as frame rate, field of view and range of interest.

In addition to helping users identify the right Cepton lidar sensors for their specific application needs, simulation also helps developers integrate Cepton lidars with other sensor modalities, such as camera and radar. Beyond testing and integration, Cepton’s lidar models can be used with NVIDIA DRIVE Replicator to generate synthetic datasets for training AI Deep Neural Networks for multi-sensor fusion and AV perception.

Cepton’s patented lidar technology enables an optimal balance between performance, reliability and cost efficiency. Cepton offers a comprehensive portfolio of lidar solutions, including lidar sensors for near-range to ultra-long range and intelligent perception solutions for automotive and smart infrastructure applications. With their compact size and seamlessly embeddable design, Cepton lidars can be integrated in a variety of vehicle locations without disrupting the appearance of modern passenger cars. Multiple simulated lidar integration options will be available as part of Cepton’s lidar models on NVIDIA DRIVE Sim.

Arbe Robotics Deployed by HiRain

Arbe Robotics Ltd. (NASDAQ: ARBE), a global leader in Perception Radar Solutions, announced that HiRain Technologies, the leading Chinese ADAS Tier 1 supplier, was selected by the Port of RiZhao in Shandong Province to provide perception radars based on Arbe’s chipset. The deployment has been implemented on FAW Trucks and on automated guided vehicles (AGVs), providing autonomous driving capabilities, advanced perception, and true safety. The first deployment started at the RiZhao port and is expected to expand to additional ports across China.

Earlier this year, HiRain announced that it is undertaking major OEM and autonomous driving projects with the radar solution it developed using Arbe’s Perception Radar Chipset, projected to reach mass production by 2023. According to analyst firm IHS Markit, autonomous trucks will transform the logistics industry by significantly reducing the cost of transporting goods. The autonomous trucking industry is looking to technology to increase truck safety, since many collisions are caused by distracted or impaired drivers. Autonomous trucking technology is designed to improve safety on the road and in industrial environments by providing a 360-degree surround view of vehicles, pedestrians, and obstacles, enabling faster reaction times than a human driver.

TIER IV Starts HDR Camera Production

TIER IV, an open-source autonomous driving startup, announced that it will start mass production of new automotive HDR cameras optimized for autonomous driving applications in October 2022. Sales to a wide range of customers will be handled by five distributors: Aisan Technology, ASMEC, Okaya & Co., PALTEK CORPORATION and Ryoden Corporation.

Background

Camera technology plays a crucial role in advanced driver assistance systems and Level 2 autonomous driving systems. It is also indispensable for the development of Level 3 and Level 4 autonomous driving systems.

TIER IV has been developing automotive cameras that outperform standard vision cameras in autonomous driving systems, applying extensive knowledge and software design know-how acquired through a number of international flagship projects. More than a year of testing has demonstrated the newly developed camera’s outstanding performance, particularly in object detection and scene understanding, leading to its adoption as a standard camera for autonomous vehicles. TIER IV has received many inquiries about its use not only in autonomous driving but also in applications such as autonomous mobile robots and security and surveillance systems.

Product features

In addition to image quality, this camera boasts superb performance compared with existing commercially available cameras. TIER IV also offers an extensive array of support that will enable users to easily develop applications on their own.

Image Quality Features

High dynamic range (HDR)

–  Enables users to capture images without blown-out highlights or crushed shadows, even in high-contrast scenes, with a high dynamic range of up to 120 dB

High sensitivity and low noise

–  Suppresses noise even in low-light environments for high-quality images

LED flicker mitigation

–  Mitigates flicker caused by LED traffic lights, which have become more common in recent years

Other Features

Optical design and pre-shipment adjustments

–  Uses only high-quality automotive lenses that minimize defocusing owing to temperature changes

–  The lens is fixed in place after high-precision adjustment of the focal position as well as the center, rotation and tilt of the lens relative to the image plane, using a five-axis active alignment system. Additional work such as individual focal adjustment is not necessary because every camera ships with uniformly high optical performance.

Built-in image signal processor (ISP)

–  The built-in ISP handles white balance adjustment and HDR rendering, reducing the processing load on the host side

–  Supports various kinds of correction including lens distortion correction 

External synchronization function

–  Enables shutter control according to synchronization signals from the host (supports synchronization by FSYNC signals based on GMSL2 protocol)

Superior environmental durability

–  Guaranteed operating temperature range: −40°C to 85°C

–  Waterproof and dustproof performance equivalent to IP69K

Maintenance of automotive quality

–  Uses parts that meet AEC standards (Q100, Q101, and Q200)

–  Automotive-grade manufacturing management is ensured and shipment inspections are conducted, both conforming to IATF 16949 and ISO 26262 standards

Support Features

Driver support

–  Supports Linux Kernel Driver (supporting major Jetson products manufactured by NVIDIA, ROScube-X manufactured by ADLINK Technology, and other industrial ECU products)

–  Supports ROS1/ROS2 Driver

Image quality adjustment

–  Image processing parameters are adjusted on the basis of data from diverse real-world environments, such as public road tests; simple image quality adjustment via API is also supported

Connectivity with Autoware*

–  Enables smooth interoperation with camera-based recognition functions included in Autoware, and with Web.Auto, the development and application platform offered by TIER IV

* Autoware is a registered trademark of the Autoware Foundation.

Future Development

To respond to a wide range of customer needs, TIER IV will commercialize a GMSL2-USB3 conversion kit so that customers can use this camera through a USB connection just like conventional web cameras. The company will also add to its product lineup a new 5.4-megapixel high-definition model (C2) that uses Sony’s IMX490 CMOS image sensor. Furthermore, TIER IV is developing autonomous driving system development kits that integrate LiDARs and ECUs. By providing these products to the market, the company will step up initiatives to realize its vision, “The Art of Open Source, Reimagine Intelligent Vehicles.”