Autonomous & Self-Driving Vehicle News: Aurora, Toyota, U Power, aiMotive, Bridgestone, Luminar, Valeo, Deepen AI & BMW

In autonomous and self-driving vehicle news are Aurora, Toyota, U Power, aiMotive, Bridgestone, Luminar, Valeo, Deepen AI and BMW.

Aurora Shows Toyota S-AM for Aurora Connect

Aurora Innovation, Inc. (NASDAQ: AUR) has unveiled a test fleet of its autonomous, custom-designed Toyota Sienna vehicles featuring Toyota’s Vehicle Control Interface (VCI) and “Sienna Autono-MaaS” (S-AM) platform. The Toyota S-AM will serve as the backbone platform for the expected launch of Aurora Connect, Aurora’s autonomous ride-hailing product.

Aurora has worked with Toyota Motor North America’s (Toyota) engineering team over the last year to establish and refine requirements to prepare this vehicle model platform to integrate with the Aurora Driver. Since unveiling its prototype last fall, Aurora has further refined the Aurora Driver hardware while Toyota built a larger fleet of platform vehicles at its facilities, customized for the requirements of its customers, including Aurora.

Aurora is autonomously testing the fleet on highways and suburban streets in Texas, where the Aurora Driver regularly handles Texas U-turns, high-speed merges, and lane changes, including those in response to vehicles on the shoulder. The Aurora Driver is also able to react to various forms of construction, stop-and-go traffic, inclement weather, and can detect pedestrians, motorcyclists, traffic lights, and more.

In honor of the milestone, Toyota executives were invited to be the first to experience the Aurora Driver in the Toyota S-AM. The riders were picked up at Toyota’s Headquarters and then driven autonomously on a portion of the route that would normally be taken to the Dallas Fort Worth International Airport. The route showcases Aurora’s ability to safely operate at highway speeds, a key technical differentiator that allows it to prioritize popular and lucrative rides, like trips to the airport, when it launches Aurora Connect.

“We congratulate Aurora on reaching their milestone of integrating its Aurora Driver technology onto our Toyota Autono-MaaS platform vehicle,” said Ted Ogawa, President and CEO of Toyota Motor North America. “The route represented what we would expect going to the airport in the future, and we look forward to seeing Aurora’s future deployment plans.”

“Toyota’s engineering team is truly world-class. Experiencing the result together this week was special and is a testament to our progress and respect for one another,” said Sterling Anderson, Chief Product Officer & Co-Founder at Aurora. “We’ve designed and delivered a purpose-built test fleet specifically for a ride-hailing experience that’s comfortable, convenient, and safe, and we look forward to sharing more on our progress soon.”

Aurora’s investment in a Common Core of Technology allowed this fleet to “inherit” all of the learnings and capabilities of Aurora’s next-generation trucks. In fact, the fleet of modified Toyota Sienna vehicles achieved “parity” with Aurora’s trucks within just six weeks of commencing on-road testing. Aurora plans to continue adding vehicles to this fleet.

U Power HPVCI Integrates with NVIDIA DRIVE Hyperion AV Platform

U Power announced it will build a high-performance vehicle computer (HPVC) for its UP SUPER BOARD, which integrates the high-performance compute architecture of the NVIDIA DRIVE Hyperion AV platform. Tapping the power of the NVIDIA DRIVE Orin system-on-chip, U Power plans to develop one of the industry’s first open vehicle supercomputers for scalable computing power.

U Power’s UP SUPER BOARD pushes the boundary significantly beyond the industry’s current skateboard chassis capabilities. Developed to meet the rapidly changing requirements for a customized experience, it integrates core capabilities of smart EVs, including E-propulsion, suspension, steering, smart driving and thermal management. At the core of the smart driving system is the pluggable HPVC, to which chips can be added or removed, delivering computing power of up to 1,000 trillion operations per second (TOPS) and beyond.

This highly scalable compute power enables the UP SUPER BOARD to address Level 2 to Level 4 autonomous driving needs. End users will also be able to continually upgrade their vehicles’ compute power.
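For a sense of scale: NVIDIA’s published peak figure for a single DRIVE Orin SoC is 254 INT8 TOPS, so a pluggable board would need roughly four Orin modules to clear the 1,000 TOPS mark. A back-of-the-envelope sketch (the four-module configuration is an illustrative assumption, not U Power’s disclosed design):

```python
# Sketch of how a pluggable HPVC could scale compute by adding SoCs.
# 254 TOPS per Orin is NVIDIA's published peak INT8 figure; the
# module counts below are illustrative only.

TOPS_PER_ORIN = 254  # NVIDIA DRIVE Orin, peak INT8 TOPS

def total_tops(num_socs: int) -> int:
    """Aggregate peak compute for a board carrying `num_socs` Orin modules."""
    return num_socs * TOPS_PER_ORIN

for n in range(1, 5):
    print(f"{n} SoC(s): {total_tops(n)} TOPS")
# Four modules already exceed the 1,000 TOPS target (4 * 254 = 1016).
```

This also illustrates why a removable-chip design matters for the Level 2 to Level 4 range: a Level 2 configuration can ship with one module, and end users can add modules later rather than replace the vehicle computer.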

“The collaboration with NVIDIA will greatly enhance UP SUPER BOARD’s intelligence and advanced driving capabilities. With even greater eco-competence, we will be better positioned to help our customers build first-class smart EVs,” said Paul Li, Founder and CEO of U Power.

“NVIDIA is committed to an autonomous future built on AI and accelerated computing,” said Rishi Dhall, Vice President of Automotive Business at NVIDIA. “NVIDIA is helping U Power create a new sustainable model for developing and manufacturing smart EVs via innovative skateboard chassis technologies.”

U Power’s focus on innovating the skateboard chassis is enabling it to build EVs for every purpose and individual. As a leader in China’s skateboard chassis market, U Power launched UP SUPER BOARD and UP SPACE in January to empower car makers with greater freedom.

aiMotive Intros aiData for Early Access

aiMotive, the largest independent technology team working on automated driving technologies, announced the newest addition to its product portfolio, aiData, an integrated, cost-efficient, data-driven pipeline for automated driving. The tool has been used for years in the in-house development of aiDrive; now selected partners and customers can apply for early access.

There are many challenges the industry faces when it comes to data:

  • Data collection: one must not only meet technical prerequisites (such as calibration and synchronization) but also contend with the fact that more than 95% of the collected data is unwanted surplus that adds no quality improvement
  • Data labelling: manual annotation is incredibly expensive, while complex, AI-based higher-level AD features need a vast amount of training data under various operational domains
  • Data management: efficiently harvesting, storing, managing, and using vast amounts of data is key, but traceability is also fundamental to safe software development and maintenance – neither is an easy task to manage

aiData has all the answers to these challenges: it can efficiently and automatically collect, process, and query multi-sensor data for Deep Neural Network-based product development. aiData contains five proprietary tools that reduce the complexity of processing data with a high level of automation while ensuring the traceability required for automotive software development:

  • aiRec: automated data collection focusing on gaps and edge cases (reference sensor design, calibration toolkit, recording and data ingestion software solution)
  • aiNotate: multi-sensor AI-based automatic annotation for dynamic and static objects with industry-leading precision
  • aiFab: synthetic data generation with high-fidelity sensor simulation enhanced by an AI-based reality filter, achieving realistic sensor data for training machine learning applications. aiFab is based on aiSim’s rendering and scenario technology, and enhanced with the necessary tools for generating vast amounts of virtual sensor and ground truth data
  • aiMetrics: integrated metrics evaluation which tracks development progress against requirements, providing real-time insights and data gap analysis
  • aiDVS: Data Versioning System enabling the precise measurement of the effects of adding new data to fill gaps and tracking the usefulness of collected data

Developing automated driving requires a complete, mature toolchain to collect, generate, use, and validate the data needed for a safe and robust solution. The modules and tools we have developed not only enable our partners and customers to use a competitive, scalable system but do so in such a way that all the data remains with the customer, royalty-free.

aiData is available now.

Bridgestone Invests In May Mobility

Bridgestone Americas (Bridgestone), a global leader in tires and sustainable mobility solutions, announced a minority investment in May Mobility, marking the company’s first investment in public-serving autonomous vehicles (AVs). May Mobility is a leader in AV technology, leveraging its innovative Multi-Policy Decision Making (MPDM) system to realize a world where AVs make transportation safer, more accessible, equitable, and sustainable. The new partnership will include the future integration of Bridgestone’s digital and predictive tire-centric technologies into May Mobility autonomous vehicles, granting deeper AV intelligence for even safer and more efficient operation. Through the partnership, Bridgestone will also gain valuable insights into autonomous vehicle operations to improve its core tire products and mobility solutions.

The partnership expands May Mobility’s ability to operate and service its vehicles in new markets through Bridgestone’s nationwide network of more than 2,200 tire and automotive service centers doing business under the Firestone Complete Auto Care, Tires Plus, Hibdon Tires Plus and Wheel Works retail brands. May Mobility will also be able to leverage Bridgestone’s mobile service provider, Firestone Direct, to support its expansion. This will further enhance May Mobility’s ability to scale AV operations for its growing fleet of Toyota passenger vehicles, including the Toyota Sienna Autono-MaaS. May Mobility plans to continue expanding operations in the U.S. and Japan, building sustainable, accessible, affordable AV transit solutions.

Luminar Acquires Freedom Photonics

Luminar (Nasdaq: LAZR), a leading global automotive technology company, announced it is acquiring high-performance laser manufacturer Freedom Photonics. This transaction follows a multi-year collaboration and brings fundamental next-generation chip-scale laser technology, IP, and production expertise in-house for Luminar lidar systems.

Luminar is vertically integrating across core lidar components to enable low costs, supply chain security and improved performance. This transaction follows the acquisitions of subsidiaries Black Forest Engineering (custom signal-processing chips) in 2017 and Optogration Inc. (receiver chips) in 2021. This integration extends Luminar’s industry leadership on its path towards democratizing safety and autonomy for the automotive industry.

“Component-level innovation and integration is critical to our performance, cost and continued automotive technology leadership. Bringing Freedom Photonics into Luminar enables a new level of economies of scale, deepens our competitive moat and strengthens our future technology roadmap,” said Jason Eichenholz, Co-Founder and Chief Technology Officer at Luminar. “We’ve worked closely with the Freedom team for the past several years. They have proven to be the best in the world for breakthrough semiconductor laser chip technology, where both power and beam quality are needed simultaneously for true high resolution at long range.”

The Freedom Photonics executive team will continue to lead and expand the business upon close of the transaction, which is expected in the second quarter.

“Joining Luminar is the perfect opportunity for Freedom Photonics, providing us an accelerated path to at-scale commercialization of our world-class diode laser technologies,” said Milan Mashanovitch, Co-Founder and Chief Executive Officer at Freedom Photonics. “In addition to helping extend Luminar’s automotive industry leadership, we will continue to serve and grow our broad customer base across other key markets.”

This all-stock transaction will not have a material impact on Luminar’s cash position or share count. Today the company filed a registration statement publicly registering the shares that may be issued in connection with this transaction.

Valeo SCALA® 2 LiDAR for Mercedes-Benz S-Class

The new Mercedes-Benz S-Class is the first car in the world to be equipped with Valeo’s SCALA® 2 LiDAR. In December 2021, Mercedes-Benz received the world’s first internationally valid system approval for conditionally automated driving (SAE Level 3), meeting the demanding legal requirements of UN-R157 for such a system. If the particular national legislation allows it, DRIVE PILOT is able to operate in conditionally automated driving mode at speeds of up to 60 km/h, in heavy traffic or congested situations and on suitable stretches of motorway. DRIVE PILOT will be available in Germany in the first half of 2022. The next step is clear: the car manufacturer plans to apply for regulatory approval in California and Nevada in 2022.

Valeo SCALA® 2 sees what the human eye, cameras and radars cannot, and adapts to all light conditions. It is not blinded by sunlight and can see equally well in total darkness.

It measures the distance to surrounding objects to the nearest centimeter, by calculating the time it takes its laser beam to travel to an obstacle and back again. This enables it to build a complete 3D image of the vehicle’s surroundings. The image, called a “point cloud,” is analyzed by sophisticated algorithms to identify all of the objects, allowing the device to distinguish between moving and static objects. It classifies them into different categories, such as cars, trucks, buses, bicycles, motorcycles, pedestrians, infrastructure, and captures their shape and position.
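The time-of-flight principle described above is simple arithmetic: distance is half the laser pulse’s round-trip time multiplied by the speed of light. A minimal sketch, with illustrative numbers rather than Valeo specifications:

```python
# Time-of-flight ranging: a laser pulse travels to the target and back,
# so the one-way distance is half the round trip at the speed of light.

C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_s: float) -> float:
    """Distance to target (meters) from the pulse's round-trip time (seconds)."""
    return C * round_trip_s / 2.0

# A pulse returning after ~1 microsecond indicates a target ~150 m away.
print(f"{range_from_tof(1e-6):.2f} m")  # ≈ 149.90 m
```

The quoted centimeter-level precision implies timing resolution on the order of 2 × 0.01 m / c, roughly 67 picoseconds per returned pulse.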

If the objects are moving, it measures their speed and keeps tracking them, even when they are no longer in the driver’s line of sight. It predicts the objects’ behavior and trajectory. But it doesn’t just detect objects: it also anticipates open space where the car can drive safely. It even spots small objects, for example a tire that has fallen on the road. It is equipped with a specific algorithm that recognizes road markings based on their contrast with the road.

With its software, Valeo SCALA® 2 transforms the raw data from the sensor into useful data. It eliminates any data that could alter its calculations, as if it were filtering the information to validate only relevant data. This enables it to cancel out any “echoes” caused by raindrops on its light pulse so that it can see through the rain and measure the density of a rain shower. Its software even allows it to troubleshoot itself. Its exclusive cleaning and heating system is triggered whenever its field of vision is blocked, by dust or ice for example.

Valeo’s LiDAR is the successful combination of high-precision mechanics, optics and electronics with software, algorithms and artificial intelligence, making it reliable, sharp and intelligent. In addition to its technological leadership, Valeo also leads in manufacturing capabilities: the Valeo Group is currently the only player in the world to produce an automotive LiDAR scanner at large scale. Valeo SCALA® 2 is central to the Mercedes-Benz DRIVE PILOT system, which helps give drivers back time during their journey.

Deepen AI Enhances Deepen Calibrate

Deepen AI announced a host of enhancements to its sensor calibration tool – Deepen Calibrate – which the company describes as the world’s most advanced calibration suite. These advancements enable Deepen AI to provide greater accuracy and speed to enterprises and start-ups alike.

Targetless calibration is the key to unlocking acceleration and adoption for various autonomous systems, e.g., automotive, drones, rovers, and robots. Deepen AI’s targetless sensor calibration offerings have expanded to:
– Overlapping Camera
– Non-overlapping Camera
– IMU-Vehicle
– LiDAR-LiDAR
– LiDAR-Camera (beta)

The target-based calibration method relies on checkerboards and other types of calibration targets. The targetless approach, by contrast, can use any scene captured in both the LiDAR and camera sensor data.
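Whichever method is used, the output of an extrinsic LiDAR-camera calibration is a rigid transform (a rotation R and translation t) that maps points from the LiDAR frame into the camera frame. A minimal sketch of applying such a transform; the 90° yaw and lens offset below are hypothetical values, not Deepen Calibrate output:

```python
import math

# An extrinsic calibration yields camera_point = R @ lidar_point + t.
# This sketch applies that transform with pure Python; the angle and
# offset are illustrative assumptions.

def yaw_rotation(theta: float):
    """3x3 rotation matrix for a rotation of `theta` radians about the Z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transform(point, R, t):
    """Map a LiDAR-frame point into the camera frame: R @ point + t."""
    return [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]

R = yaw_rotation(math.radians(90))   # hypothetical 90° yaw between sensors
t = [0.1, 0.0, -0.2]                 # hypothetical mounting offset in meters

lidar_point = [5.0, 0.0, 1.0]        # a point 5 m ahead of the LiDAR
print(transform(lidar_point, R, t))  # ≈ [0.1, 5.0, 0.8]
```

Calibration, whether target-based or targetless, is essentially the problem of estimating R and t (plus camera intrinsics) so that projected LiDAR points line up with the corresponding image pixels.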

Deepen AI has also introduced Vehicle to 2D-LiDAR sensor calibration. Users can now easily calibrate, visualize, debug and export error statistics for Vehicle to 2D-LiDAR calibration.

Another key improvement is greater precision and accuracy across all target-based sensor calibrations, achieved through newly developed algorithms.

Deepen Calibrate is an easy-to-use web browser-based tool & edge library* that supports both intrinsic and extrinsic calibrations. Deepen Calibrate brings down the time spent on calibrating multi-sensor data from hours to seconds, enabling accurate localization, mapping, sensor fusion perception, and control. Deepen Calibrate also supports sensor pairings for Radar and IMU-based sensors in addition to the already existing camera, LiDAR, and Vehicle calibration algorithms.

*Edge library requires customization on the customer target hardware and sensor suite.

“Targetless calibration is critical to help make the world safer – in line with our mission. With the launch of the new calibration pairs and enhanced features, we have expanded our offerings to multiple new use cases for robotics, automotive, and drones. We are already working with large global enterprises to solve complex sensor calibration challenges. We are excited about the upcoming developments.” – Mohammad Musa, Co-Founder & CEO, Deepen AI

Licensing and customized packages are available to both enterprises and start-ups. More calibration types are being added regularly.

SORDI from BMW

SORDI (Synthetic Object Recognition Dataset for Industries) accelerates artificial intelligence in production

The BMW Group is publishing the world’s largest data set to streamline and significantly accelerate the training of artificial intelligence in production. The synthesised AI dataset – known as SORDI (Synthetic Object Recognition Dataset for Industries) – consists of more than 800,000 photorealistic images. These are divided into 80 categories of production resources, from pallets and pallet cages to forklifts, and include objects of particular relevance to the core technologies of automotive engineering and logistics.

By publishing SORDI, the BMW Group together with its partners Microsoft, NVIDIA and idealworks is making available the world’s largest reference dataset for artificial intelligence in the field of manufacturing. The visual data is of particularly high quality, and the integrated digital labels enable basic image processing tasks to be carried out, such as classification, object detection or segmentation for relevant areas of production in general.

“The BMW Group has been using artificial intelligence since 2019. AI has already been utilised in various quality assurance applications in production at the plants. SORDI, the new synthetic dataset, makes AI models much faster to train and AI considerably more cost-efficient in production,” says Michele Melchiorre, Senior Vice President of BMW Group Production System, Planning, Tool and Plant Engineering.

To create the synthesised AI training data without manual effort, the simulated environment for robotics, the digital twin of the production system and the AI training environment were all fused within NVIDIA Omniverse. The rendering pipeline from the BMW Tech Office in Munich allows any number of photos, including labels, to be synthesised in sufficient photorealistic HD quality for them to be used in the creation of highly robust AI models. SORDI can be utilised by IT professionals to develop and tailor AI solutions for manufacturing, and by production employees to keep mature AI systems validated and ready for the start of production.

The dataset is freely available to software developers (https://github.com/bmw-innovationlab), and its publication represents the next targeted step in the BMW Group’s systematic expansion of activities to democratize artificial intelligence. The publications of no-code AI and SORDI complement each other: on the one hand, the BMW Labelling Tool Lite and published AI training tools explicitly allow users to use AI intuitively, even if they lack sound IT expertise. On the other, SORDI’s synthesis significantly accelerates and simplifies the training of AI models for production applications.