@CES Autonomous: TIER IV, Mobileye & Lotus Robotics

At CES there were autonomy-related announcements from TIER IV, Mobileye and Lotus. TIER IV started the Co-MLOps project. Mobileye’s three key platforms – Mobileye SuperVision™, Mobileye Chauffeur™ and Mobileye Drive™ – will be deployed in internal combustion and electric vehicles. Lotus is offering robotics software, cloud tools and monitoring.

TIER IV Starts Co-MLOps

TIER IV, a pioneer in open-source autonomous driving (AD) technology, announces the initiation of the Co-MLOps (Cooperative Machine Learning Operations) Project. This new endeavor is aimed at scaling the development of AI (Artificial Intelligence) for autonomous driving. The deployment of the Co-MLOps Platform, developed under this project, will enable the global sharing of managed sensor data, including camera images and LiDAR (Light Detection and Ranging) point clouds, sourced from various regions. Furthermore, the Co-MLOps Platform will offer MLOps functions and Edge AI reference models, empowering partner companies to enhance their proprietary AI for autonomous driving.

TIER IV is set to exhibit edge AI models at CES 2024 in Las Vegas, Nevada, from January 9-12, following a successful initial proof of concept (PoC) test conducted in 2023 across eight global regions, including Japan, Germany, Poland, Taiwan, Turkey, and the United States. The PoC test utilized video data collected from these regions to evaluate the perception capabilities of multitask Edge AI models, which have been optimized to operate at less than 10W of power consumption. This showcase at TIER IV’s booth will highlight the fruits of this innovative project.

Conventional Challenges

In the development of AI for autonomous driving, large datasets are essential to achieve competitive performance levels. Historically, companies and research institutions have independently collected data and engaged in similar technology development, leading to overlaps in database construction and development processes. Furthermore, limited resources for setting up development environments and data collection have made it challenging for some companies to implement development processes that are robust enough to achieve desired performance levels. This has impacted the scalability of technological development across the industry.

Platform Overview

TIER IV is leading the development of the Co-MLOps Platform, set to serve as a foundation for various companies and research institutions to partake in the development of AI for autonomous driving at an industry-leading level. This initiative aims to catalyze technological development that was previously hindered by insufficient scalability and to foster open innovation through open collaboration. The platform will be structured to fulfill the following objectives:

  • Ensure that data collected by companies is shared with appropriate privacy and security safeguards.
  • Manage the utilization of MLOps functionalities, crucial for the development of AI for autonomous driving, on the large-scale datasets shared among companies.
  • Provide opportunities for companies to utilize Edge AI reference models, created using common MLOps functionalities, in their own development of AI for autonomous driving.

The development of the platform is being propelled by leveraging the diverse services offered by Amazon Web Services (AWS). AWS’s key services such as storage, databases, and computing, as well as its extensive global infrastructure, will serve as the cornerstone for efficient and stable operations. The platform will also adopt a cloud-native approach, eventually supporting updates to the recognition models over the air (OTA). By leveraging the power of AWS, the Co-MLOps Platform will offer cutting-edge technology and a stable infrastructure, thereby accelerating the development of innovative autonomous driving AI.
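The cloud-native OTA flow described above can be sketched in miniature: an edge device compares its current model version against a cloud-published manifest and verifies the downloaded artifact's hash before swapping it in. All function names and the manifest layout here are illustrative assumptions, not TIER IV's or AWS's actual API.

```python
# Hedged sketch of a cloud-native OTA model-update check.
# Names and manifest fields are hypothetical, for illustration only.
import hashlib


def _parse_version(v: str) -> tuple:
    """Turn '2024.2.1' into (2024, 2, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))


def check_for_model_update(current_version: str, manifest: dict) -> bool:
    """Return True if the cloud manifest advertises a newer model version."""
    return _parse_version(manifest["version"]) > _parse_version(current_version)


def verify_artifact(blob: bytes, expected_sha256: str) -> bool:
    """Verify a downloaded model artifact against its manifest hash
    before installing it on the edge device."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256
```

Numeric version parsing (rather than string comparison) is used so that, for example, 2024.10 correctly ranks above 2024.9.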

The platform fosters differentiation by empowering participating companies to retain their proprietary technologies, functional safety and development processes, and quality control measures. Additionally, integrating the outputs developed on this platform with the Open AD Kit, defined by the Autoware Foundation based on the SOAFEE framework, will expedite software development towards SDV mass production while maximizing the utilization of the Arm® Automotive platform.

“We believe this project will catalyze new collaborations and competitions in AI development within the mobility industry, leading to various innovations, particularly in recognition technologies,” said Shinpei Kato, founder, CEO and CTO of TIER IV. “Through collaboration with many partners, we aim to develop the world’s leading AI technologies for autonomous driving and promote the rollout of safe and reliable AD technologies.”

“We anticipate that the construction of a world model will be further boosted by this project, by utilizing a diverse range of large datasets collected worldwide,” said Professor Yutaka Matsuo of the University of Tokyo’s Graduate School of Engineering. “The aim is for our joint research to lead to further development of practical applications for autonomous driving through the integration of generative AI technology. We aim to introduce new methods that will enhance the performance of autonomous driving AI.”

“We support the vision of this project that aims to solve common challenges in the mobility industry and accelerate the creation of innovation,” said Bill Foy, Director of Automotive Solutions and GTM at AWS. “By providing comprehensive support using various AWS services and global infrastructure, we will contribute to the long-term success of this project.”

“Software is changing what it means to own a car today, and to deliver a software-defined vehicle to mass markets requires expertise and collaboration from across the industry, such as through initiatives like SOAFEE,” said Robert Day, SOAFEE SIG Governing Body representative and director of automotive partnerships, Automotive Line of Business, Arm. “The Co-MLOps platform project is another important example of leveraging expertise from across the industry to encourage and further accelerate the development and deployment of software-defined vehicles.”

Future Prospects

In the first half of 2024, TIER IV, in collaboration with its partners, aims to optimize sensor architecture, standardize annotation formats, and develop data search and active learning infrastructure utilizing large language models (LLMs). These advancements are expected to enable more efficient and accurate development of AI for autonomous driving. The development of low-power and multimodal AI models for sensor fusion will also allow for the integration of data from various sensors, leading to advanced Edge AI models with highly sophisticated perception capabilities. Furthermore, the generation of learning data using World Models, generative AI, and integration with Neural Simulators will simulate complex real-world scenarios, strengthening the training of AI models.
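One plausible reading of the active learning infrastructure mentioned above: rank collected frames by model confidence and queue the least-confident ones for annotation, so labeling effort concentrates where the model is weakest. This is a generic sketch of uncertainty sampling; the function and data shapes are assumptions, not TIER IV's pipeline.

```python
# Hedged sketch of uncertainty-based sample selection for an
# active learning loop. Illustrative only.


def least_confident(predictions, k):
    """predictions: list of (sample_id, max_class_probability) pairs
    from the current Edge AI model. Return the k sample ids the model
    is least sure about -- the frames most worth annotating next."""
    ranked = sorted(predictions, key=lambda p: p[1])  # lowest confidence first
    return [sample_id for sample_id, _ in ranked[:k]]
```

In practice a pipeline like this would feed the selected frames to annotators, retrain, and repeat, which is what makes shared large-scale datasets valuable.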

The second half of 2024 is slated for the commencement of full-scale operations of the Co-MLOps Platform, incorporating these new features. Through this platform, TIER IV seeks to significantly improve the development process of AI for autonomous driving in collaboration with partner companies, thereby accelerating technological advancement in the industry. TIER IV continues to actively seek partners for the development and specification of these functionalities.

Mobileye Wins 17 ICE and EV Models

Mobileye (Nasdaq: MBLY) announced that it has been awarded a series of production design wins by a major Western automaker. Under these design wins, multiple global brands are expected to implement new automated driving solutions using Mobileye’s three key platforms – Mobileye SuperVision™, Mobileye Chauffeur™ and Mobileye Drive™ – for 17 internal combustion and electric vehicle models, which are set to begin rolling out in 2026.

The extensive set of awards includes Mobileye’s unique and innovative software tool that will ensure each brand can maintain the utmost level of customization and personalization in their driving experiences. The premium ADAS and automated solutions are expected to be offered on multiple vehicle platforms across a broad range of geographies and various powertrain types, and can be easily expanded to additional models based on demand.

“These design wins represent an historic milestone in the development of automated driving, and will greatly increase its availability to customers globally,” said Mobileye CEO Prof. Amnon Shashua. “Execution of these production programs will set the standard for software-driven intelligent driving, leveraging the expertise of both companies at volume to serve customers around the world.”

Mobileye will work with the various brands as a Tier 1 to develop new services for hands-off, eyes-on driving on the Mobileye SuperVision platform, leveraging AI-powered surround computer vision and radar that enable navigate-on-pilot functions for highway, rural and urban roads in defined operational domains. These services are currently expected to begin rolling out across multiple markets and regions in 2026.

Mobileye will also work with these automotive brands to implement the Mobileye Chauffeur platform on select models, offering eyes-off, hands-off advanced driving solutions in specified operational design domains. Mobileye enables Chauffeur by adding a second, independent perception system leveraging radar and lidar sensor outputs, as well as additional computing power as needed, to the SuperVision platform, creating a naturally scalable upgrade path for automakers.
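The two-channel redundancy described above can be illustrated with a deliberately simplified fusion rule: treat an obstacle as real if either independent perception channel reports it, so the camera subsystem and the radar/lidar subsystem each act as a safety net for the other. This is a toy sketch of the general redundancy principle, not Mobileye's actual fusion logic.

```python
# Hedged sketch: safety-first fusion of two independent perception
# channels. Names are illustrative, not Mobileye's implementation.


def fused_obstacles(camera_channel: set, radar_lidar_channel: set) -> set:
    """An obstacle detected by either independent channel is treated
    as present, so a miss in one channel cannot hide a hazard."""
    return camera_channel | radar_lidar_channel
```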

The two companies also agreed to bring fully autonomous vehicles into series production. Powered by the Mobileye Drive platform, this program is designed to produce purpose-built vehicles utilized in robotaxi and mobility-as-a-service operations. The Drive-enabled vehicles leverage computer vision, lidar and Mobileye imaging radar, with initial driverless deployments targeted for 2026.

All systems will use the Mobileye EyeQ™6H system-on-chip, designed for powerful yet efficient computing, to integrate all sensing and REM crowdsourced mapping with a safe driving policy.

“Since our founding, we have focused on delivering the safety and convenience benefits of advanced computer vision technology around the world,” said Shashua. “The pace of innovation has undoubtedly increased and the breadth of this agreement serves as a blueprint for the scalability and customizability of our technology stack, with SuperVision serving as a bridge to eyes-off systems for both consumer-owned vehicle and mobility-as-a-service markets.”

Lotus Robotics – Full Self-Driving Software Stack

Lotus Robotics, the self-driving technology unit of Lotus, is showcasing its latest offerings at the 2024 CES in Las Vegas, Nevada.

The company provides a range of autonomous software and services for businesses wanting to deploy self-driving technology in their operations safely, securely and efficiently. These solutions are on display at CES and include:

  • ROBO Soul: a full self-driving software stack that can be integrated into any vehicle, in any environment. The company currently offers an end-to-end solution providing up to Level 4 autonomy, meaning the vehicle can perform driving tasks such as parking and highway driving under specific circumstances, with human override available as an option. Components of this technology are currently employed in Lotus’ next-generation electric vehicles, its hyper-SUV, Eletre, and hyper-GT, Emeya.
  • ROBO Galaxy: a range of cloud-based tools that underpins ROBO Soul. It enables businesses to manage and analyse data, in order to increase efficiency and accessibility of their autonomous fleets. Lotus Robotics collects data from multiple sources such as sensors, road information and algorithms, so ROBO Soul can continue to learn and improve its self-driving capabilities throughout its testing and development phases.
  • ROBO Matrix: uses real-time monitoring to provide drivers with remote safety solutions, including guidance, control and parallel driving. It also deploys AI to continually learn from its environment and improve the safety and accuracy of its self-driving.

Lotus Robotics is currently working with multiple leading automotive brands to enable them to reap the benefits of self-driving technology. To scale its offering and deliver to customers around the world, the company has tapped into Amazon Web Services (AWS). It stores and analyses the data collected across its product portfolio on AWS, and adheres to all regulations in each market it operates in.

The company also offers a range of hardware solutions to increase adoption of self-driving technology, which are being showcased at the event, including:

  • V1: a multi-purpose chassis, which offers unlimited scalability and acts as the foundation for its autonomous driving software. It offers a modular architecture and an electric vehicle battery, and is available in different sizes.
  • Robocube: an intelligent cleaning robot that has been designed to create cleaner spaces, improve efficiency and increase safety in cities. It features a full stack of Level 4 autonomous driving software that can be used in any environment. Examples of where it can be used include urban cleaning, controlled traffic areas and sidewalks.

Li Bo, CEO at Lotus Robotics, said: “We are thrilled to be showcasing our latest products in North America for the first time. As adoption of self-driving technology accelerates, we are seeing a strong demand for our solutions here and see this as a key market for us to tap into in 2024 and beyond.”

Feng Qingfeng, CEO, Lotus Group, said: “Lotus Robotics was born out of a desire to transform Lotus into a global technology brand. We have developed a range of pioneering autonomous products to enable consumers and businesses to realise the benefits of autonomy today, and are excited to scale our solutions globally and reach even more people.”

Lotus Robotics’ booth is located at the AI pavilion at #9217, Tech East, North Hall, Las Vegas Convention Centre.

The company was formed in 2021 and is on a mission to accelerate the transition to self-driving technology today, by creating endless opportunities for how people and goods move using intelligent vehicles and robotics.

It has developed best-in-class hardware, award-winning algorithms and software, and powerful cloud solutions. Lotus Robotics has also won multiple competitions including the CVPR 2023 Online HD Map Construction Challenge and the 2022 Argoverse Motion Forecasting Competition.