In autonomous and self-driving vehicle news are NTT DATA, Aurora Labs, MIT, Outrider, AMD, DENSO, MicroCloud and Cepton.
NTT DATA & Aurora Labs
NTT DATA, a global digital business and IT services leader, and Aurora Labs, an automotive artificial intelligence (AI) company, announced a global strategic cooperation in the automotive industry, with first joint projects in production and logistics. Customers benefit from an intelligent combination of AI technology and 5G for scalable, agile over-the-air (OTA) software updates with superior efficiency and end-to-end security, including the 5G transport layer.
NTT DATA and Aurora Labs’ intelligent 5G over-the-air update is based on Aurora Labs’ self-learning AI technology with built-in self-optimization. By combining AI, which significantly reduces the amount of data that needs to be transmitted, with 5G connectivity, a 25-fold increase in efficiency is achievable: a single radio cell can update 5,000 end nodes (cars), compared with 1,000 end nodes in 4G networks. These services help reduce time and cost while also lowering the energy consumed per bit, supporting the companies’ joint commitment to sustainability.
Public 5G networks are currently being rolled out area-wide. To supplement this, companies can now build their own 5G networks restricted to a dedicated building, factory or lot, providing independent 5G that only authorized subscribers can connect to. Private 5G networks provide advanced reliability: uninterrupted connectivity, low latency and high bandwidth. Combining a self-contained network with software update files created by Aurora Labs’ proprietary, non-open-source algorithms results in unparalleled security. Built for evolution and scalability, NTT DATA’s solutions are designed to grow with the client’s demand, achieving continuously increasing speeds and coverage to meet the customer’s over-the-air update needs without increasing the cost of hardware such as antennae, base stations or device memory.
Aurora Labs’ AI-based Vehicle Software Intelligence (VSI) solutions are being used by global automotive and device manufacturers to continuously collect actionable data and obtain a deep understanding of the changes made to the vehicle’s software. Integrating the VSI solution early into the software development lifecycle streamlines the development process and creates the industry’s smallest update files. Aurora Labs’ AI-based Vehicle Software Intelligence offers significant economic benefits to the auto industry with a clear, cost-effective value proposition, saving up to 98% of device hardware and data transmission costs for software updates.
“The key to meeting society’s future mobility needs is to provide the automotive industry with highly efficient, reliable and secure solutions. Through our partnership with Aurora Labs, we combine technological innovation with our industry expertise to meet users’ needs for a better quality of service in a safe and sustainable way,” said Stefan Hansen, CEO and Chairman of the Management Board, NTT DATA DACH.
“Distributing updates to thousands of vehicles quickly, securely, and reliably via wireless connections is far from trivial, especially as data volumes continue to grow. The AI capabilities of Aurora Labs and the industry experience and 5G expertise of NTT DATA have resulted in a solution that scales for future growth,” said Kai Grunwitz, CEO, NTT Ltd. Germany.
“Current OTA update solutions cannot support the data required to manage the software lifecycle of in-vehicle infotainment (IVI) and ADAS systems. OTA is crucial to stabilize the software during the first 12 months after the launch of new vehicles. The capability to iterate fast requires data-size efficiency and robust networks. We believe that partnering with NTT DATA is a game changer for addressing software agility in development, production and on the road,” said Zohar Fox, CEO of Aurora Labs.
“Through this strategic cooperation, we will continue to innovate, accelerating the means of doing business while supporting our clients in tackling the intricacies of delivering seamless and connected vehicle experience,” said Noriyuki Kaya, Chief Digital Assets Officer (CDAO) of NTT DATA Inc.
Energy for AV Computers Could Create as Many Greenhouse Gases as Data Centers
In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.
That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.
The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.
The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.
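A toy calculation makes the race between workload growth and hardware-efficiency growth concrete. The 840 W baseline and the doubling periods come from the article; the 28-year horizon (roughly the span to the 2050 scenario) is an assumption for illustration:

```python
# Illustrative sketch of the MIT scenario: computational workloads
# double every 3 years, so efficiency must double faster than every
# 1.1 years to keep per-vehicle computing power from growing.
def compute_power_watts(years, p0=840.0,
                        workload_doubling=3.0,
                        efficiency_doubling=1.1):
    """Per-vehicle computing power after `years`, assuming workload
    grows as 2^(t/3) and efficiency grows as 2^(t/1.1)."""
    workload_factor = 2.0 ** (years / workload_doubling)
    efficiency_factor = 2.0 ** (years / efficiency_doubling)
    return p0 * workload_factor / efficiency_factor

# Efficiency doubling every 1.1 years: power shrinks over 28 years.
fast = compute_power_watts(28)
# Efficiency doubling only every 4 years: power blows past 1.2 kW.
slow = compute_power_watts(28, efficiency_doubling=4.0)
print(f"fast efficiency growth: {fast:.4f} W, slow: {slow:.0f} W")
```

With the slower (4-year) efficiency doubling, per-vehicle power climbs to several kilowatts, well above the 1.2 kW ceiling the researchers identified, which is the point of their finding.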
“If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.
Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro and was presented in a TEDx talk.
The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.
The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
“On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
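As a minimal sketch, the model's structure can be written out directly. The fleet size, computer power, and driving hours below are the article's figures; the grid carbon intensity of 0.475 kg CO2/kWh is an assumed global-average value for illustration, not a number from the study:

```python
# Sketch of the emissions model: fleet size x per-vehicle compute
# power x hours driven x carbon intensity of the electricity.
def annual_emissions_mt(fleet_size, power_kw, hours_per_day,
                        kg_co2_per_kwh):
    """Annual CO2 emissions (megatonnes) from onboard computing."""
    energy_kwh = fleet_size * power_kw * hours_per_day * 365
    return energy_kwh * kg_co2_per_kwh / 1e9  # kg -> megatonnes

# 1 billion vehicles, 840 W computers, 1 hour of driving per day,
# assumed 0.475 kg CO2/kWh grid intensity.
emissions = annual_emissions_mt(1e9, 0.840, 1, 0.475)
print(f"{emissions:.0f} Mt CO2/year")
```

Under these assumptions the result lands in the ~150 Mt/year range, which is consistent with the article's comparison to present-day data-center emissions (about 0.3 percent of global greenhouse gas emissions).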
For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.
In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.
To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.
When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.
For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
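The arithmetic behind these figures can be reproduced. The 10 networks, 10 cameras, and one hour of driving come from the article; the 60 frames-per-second camera rate is an assumption (the article says only "high frame rates"), chosen because it reproduces the stated totals:

```python
# Reproducing the inference arithmetic from the example above.
DNNS = 10               # deep neural networks per vehicle
CAMERAS = 10            # cameras feeding each network
FPS = 60                # assumed frames per second per camera
SECONDS_DRIVEN = 3600   # one hour of driving per day

per_vehicle_daily = DNNS * CAMERAS * FPS * SECONDS_DRIVEN
fleet_daily = per_vehicle_daily * 10**9  # one billion vehicles

print(f"{per_vehicle_daily:,} inferences per vehicle per day")
print(f"{fleet_daily:.2e} inferences per day fleet-wide")
```

This yields 21.6 million inferences per vehicle per day and 21.6 quadrillion (2.16e16) fleet-wide, matching the figures in the article.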
“After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.
Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.
To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.
One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.
In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.
Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.
While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.
“We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.
This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.
Outrider Raises $73 Million Series C

Outrider, the pioneer in autonomous yard operations for logistics hubs, today announced it closed $73 million in Series C financing led by FM Capital. New investors include a wholly owned subsidiary of the Abu Dhabi Investment Authority (ADIA) and NVentures, NVIDIA’s venture capital arm. Existing investors participating in the round include Koch Disruptive Technologies (KDT) and New Enterprise Associates (NEA). Outrider has raised $191 million in financing to date based on its technical leadership in autonomous systems for distribution yards.
“Outrider has consistently delivered breakthrough technology to automate one of the most inefficient links in the supply chain – the distribution yard,” said Andrew Smith, CEO and Founder of Outrider. “Our customers will move massive quantities of freight more efficiently, safely, and sustainably using Outrider’s technology. We are thrilled to have an outstanding network of investors who share our vision to set a new standard for the global logistics industry.”
Trucking moves over 20 billion tons of freight each year, and almost all of it passes through distribution yards. While they are critical links in the supply chain, today’s yards still run largely the way they have for decades – filled with repetitive, manual tasks performed in often inhospitable and potentially hazardous conditions. These yards are difficult to staff and create bottlenecks in the supply chain. By automating yard operations, logistics-dependent enterprises will increase the time freight spends moving down the highway, address labor shortages, and allow more people to work in safer environments.
Outrider will use this funding to expand its proprietary autonomy and safety technology portfolio, increase hiring domestically and internationally, and scale its yard automation solution with large customers in the package shipping, retail, eCommerce, consumer packaged goods, grocery, manufacturing, and intermodal industries. Outrider’s customers, representing more than 20% of all yard trucks operating in North America, have invested in joint product testing and pilot operations since 2019.
AMD Powers DENSO LiDAR
AMD announced that its adaptive computing technology is powering leading mobility supplier DENSO Corporation’s next-generation LiDAR platform. The new platform will enable over 20X improvement in resolution with extremely low latency for increased precision in detecting pedestrians, vehicles, free space and more. The DENSO LiDAR platform, targeted to begin shipping in 2025, will leverage the AMD Xilinx Automotive (XA) Zynq™ UltraScale+™ adaptive SoC and its functional safety suite of developer tools to enable ISO 26262 ASIL-B certification.
DENSO is using the AMD XA Zynq UltraScale+ multi-processor system-on-a-chip (MPSoC) platform in its Single-Photon Avalanche Diode (SPAD) LiDAR system, which generates the highest point-cloud density level of any LiDAR system on the market today. Point-cloud density describes the number of points within a given area and is analogous to image resolution, where richer data ensures that crucial decision-making details are captured. Generally, SPAD-based systems are being adopted by automakers because of the space savings that can be achieved. The highly adaptable XA Zynq UltraScale+ MPSoC enables DENSO’s LiDAR systems to reduce the size of current LiDAR implementations, allowing multiple LiDARs to work in tandem for forward and side views of a vehicle. One device can be used for multiple DENSO LiDAR systems, including future generations, which drives down system costs and helps designs be future-ready.
Current vehicles in production may have just one forward-looking LiDAR, but next-generation vehicles will have multiple systems, including forward-, rear- and side-facing LiDARs. The additional systems are needed to move beyond driver assistance to full autonomy. DENSO LiDAR can also be used for infrastructure monitoring, factory automation and other non-automated-driving applications.
“We are excited to expand our collaboration with AMD as we introduce our next-generation LiDAR system,” said Eiichi Kurokawa, head of Sensing System business unit, DENSO Corporation. “AMD high-performance, highly scalable, programmable silicon offers distinct benefits for the extremely complex image processing requirements of our LiDAR sensor architecture. The flexibility and capabilities of the Zynq UltraScale+ MPSoC platform and its ability to meet stringent functional safety requirements led us to work with AMD.”
DENSO’s SPAD LiDAR can generate over three million points per second at 10 frames per second. This middle-range LiDAR system uses XA Zynq UltraScale+ MPSoCs for system-monitoring functionality, helping to keep the temperature and the overall system within correct operating conditions. Because the system uses time-to-digital conversion instead of analog-to-digital converters, the overall system size and cost can be optimized while still delivering high-performance, high-density data.
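Taken together, the stated throughput implies the size of each point cloud the system must process per frame:

```python
# Per-frame point-cloud size implied by the stated DENSO figures.
points_per_second = 3_000_000   # over three million points per second
frames_per_second = 10
points_per_frame = points_per_second // frames_per_second
print(f"{points_per_frame:,} points per frame")
```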
“DENSO has developed an exceptionally precise LiDAR system. With LiDAR continuing to evolve, there are new technology requirements, driving the need for improvements in sensitivity, density and performance,” said Mark Wadlington, senior vice president and general manager, Core Markets Group, AMD. “Through AMD adaptive computing technology we’re helping to enable a reduction in system size and space, while also improving resolution for increased precision in object detection, all at very low latency.”
MicroCloud Hologram 3D Holographic LiDAR Point Cloud Denoising
MicroCloud Hologram Inc. (NASDAQ: HOLO) (“HOLO” or the “Company”), a Hologram Digital Twins Technology provider, today announced that it developed a point cloud denoising algorithm for the real-time 3D holographic reconstruction of single-photon LiDAR data. The algorithm is the result of the Company’s independent research and development, which is conducive to further improving the Company’s intellectual property protection system, maintaining its technological leadership, and enhancing its core competitiveness.
Although 3D holographic LiDAR point cloud imaging continues to evolve rapidly, currently available computational imaging algorithms are often too slow, insufficiently detailed, or demand extremely high computing power, and even CNN-based (convolutional neural network) algorithms for estimating scene depth struggle to meet real-time requirements after training. HOLO proposes a new algorithm structure that meets the requirements of speed, robustness, and scalability. The algorithm applies a point cloud denoising tool from computer graphics and can efficiently model the target surface as a 2D manifold embedded in 3D space. It can incorporate information about the observation model, such as Poisson noise, the presence of bad pixels, and compressed sensing. Using manifold-modeling tools from computer graphics and massively parallel noise reducers, the algorithm can process tens of frames per second. HOLO’s algorithm consists of three main steps: depth update, intensity update, and background update.
Depth update: Gradient steps are taken for depth variables, with point clouds denoised using the point-set surface algorithm. The update operates in a coordinate system in 3D holographic space, adapting to smooth continuous surfaces under the control of the kernel. In contrast to conventional depth-image denoising, HOLO’s point cloud denoising can handle an arbitrary number of surfaces per pixel, regardless of the format. In addition, all 3D points are processed in parallel, significantly reducing computing time.
Intensity update: Gradient steps are taken on the coordinates of individual pixels in 3D holographic space to reduce noise, so only the correlation between points within the same surface needs to be considered. A nearest-neighbor low-pass filter is applied to each point; this step considers only local correlations and processes all points in parallel. After the denoising step, points below a given intensity threshold, i.e., the minimum permissible reflectance, are removed.
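As an illustrative sketch only (HOLO's actual algorithm is proprietary and not public), the intensity step described above, a low-pass smoothing step followed by a minimum-reflectance cut, might look like this. The point format, step size, and threshold value are all assumptions:

```python
# Hypothetical sketch of an intensity-update step: smooth each
# point's intensity toward the local average (a simple stand-in
# for a low-pass filter over one surface), then drop points below
# the minimum permissible reflectance.
def intensity_update(points, step=0.5, min_reflectance=0.3):
    """points: list of (x, y, z, intensity) tuples on one surface."""
    intensities = [p[3] for p in points]
    local_mean = sum(intensities) / len(intensities)
    denoised = []
    for x, y, z, i in points:
        # gradient step toward the local (low-pass) estimate
        i_new = i + step * (local_mean - i)
        if i_new >= min_reflectance:  # threshold cut
            denoised.append((x, y, z, i_new))
    return denoised

# Two strong returns and one weak (likely noise) return.
cloud = [(0.0, 0.0, 1.0, 0.8),
         (0.1, 0.0, 1.0, 0.7),
         (0.2, 0.0, 1.0, 0.02)]
print(intensity_update(cloud))  # weak point is removed
```

In a real implementation the smoothing would use a local neighborhood per point rather than a whole-surface mean, and each point would be processed in parallel, as the description above indicates.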
Cepton Funding