In autonomous and self-driving vehicle news are May Mobility, Nullmax and TIER IV.
10K Riders Use goMARTI with May Mobility
May Mobility, a leader in the development and deployment of autonomous driving (AD) technology, announced that more than 10,000 riders have used its goMARTI autonomous vehicle service in Grand Rapids, Minn., since its launch in September 2022. Grand Rapids is a rural community known for extreme winter weather and steep snow banks. This milestone demonstrates May Mobility’s ability to operate under challenging and diverse conditions while becoming a trusted and vital resource for the community.
May Mobility, the Minnesota Department of Transportation (MnDOT), The PLUM Catalyst and the city of Grand Rapids partnered to deliver an accessible transportation option that augments existing transportation. By providing wheelchair-accessible vehicles and operating in the evenings and on weekends, May Mobility’s autonomous vehicle offerings have earned the rural community’s trust and acceptance. According to a series of MnDOT rider surveys, 98% of riders expressed positive feelings toward the service, up 23% since its launch. Additionally, 98% of riders reported feeling safe in the vehicles, leading to a 30% increase in their comfort levels with the technology compared to pre-ride responses.
A greater sense of safety and comfort can be attributed in part to May Mobility’s unique approach to autonomous technology. While many autonomous vehicles rely exclusively on large numbers of pre-programmed examples to “learn” how to drive, May Mobility’s vehicles handle never-before-seen situations using the company’s Multi-Policy Decision Making (MPDM) technology. MPDM performs real-time reinforcement learning, analyzing thousands of possible scenarios every second and discarding any maneuver that would put someone in harm’s way, which makes for a safer and more comfortable ride.
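For illustration, that decision loop can be sketched in a few lines of Python. Everything here is an assumption for teaching purposes: the candidate policies, the toy vehicle dynamics, and the comfort-versus-progress scoring are stand-ins, not May Mobility’s proprietary MPDM implementation.

```python
from dataclasses import dataclass

# Illustrative sketch only: the candidate policies, toy dynamics, and
# scoring below are assumptions, not May Mobility's actual MPDM code.

@dataclass
class WorldState:
    ego_speed: float     # m/s
    gap_to_lead: float   # meters to a stopped vehicle ahead

POLICIES = {             # candidate policy -> commanded acceleration (m/s^2)
    "maintain": 0.0,
    "slow_down": -1.5,
    "hard_brake": -4.0,
}

def rollout(state: WorldState, accel: float, dt=0.5, horizon=5.0):
    """Forward-simulate one candidate policy; return (is_safe, score)."""
    speed, gap = state.ego_speed, state.gap_to_lead
    t = 0.0
    while t < horizon:
        speed = max(0.0, speed + accel * dt)
        gap -= speed * dt
        if gap <= 0.0:                       # predicted collision
            return False, float("-inf")
        t += dt
    # Reward progress (remaining speed), penalize discomfort (harsh braking).
    return True, speed - 2.0 * abs(accel)

def choose_policy(state: WorldState) -> str:
    """Discard unsafe candidates, then pick the best-scoring survivor."""
    results = {name: rollout(state, a) for name, a in POLICIES.items()}
    safe = {name: score for name, (ok, score) in results.items() if ok}
    return max(safe, key=safe.get) if safe else "hard_brake"

print(choose_policy(WorldState(ego_speed=10.0, gap_to_lead=30.0)))  # slow_down
```

The essential idea the sketch captures is that safety acts as a hard filter before any scoring happens: a policy whose rollout ends in a collision is discarded outright, and only the surviving candidates compete on comfort and progress.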
Grand Rapids provided May Mobility the opportunity to study operations in harsh winter climate conditions. With thousands of petabytes of data collected from more than 160,000 miles of driving, the goMARTI service has played a crucial role in further improving the technology. The data collected during snowy conditions has been valuable in developing robust algorithms that contribute to safer autonomous driving in varying environments across May Mobility’s current and upcoming sites.
“Our autonomous vehicles handled the challenging weather very well, showing that MPDM was able to handle situations outside our training set,” said Edwin Olson, CEO and co-founder of May Mobility. “With the priceless data from these 10,000 riders, we will be able to continue to improve our goMARTI service and help communities across the country solve some of their hardest transportation challenges.”
The goMARTI service is free to use, and its route includes approximately 70 pick-up and drop-off points, such as grocery stores, medical sites, and community and recreation centers. May Mobility’s fleet of five autonomous vehicles includes three that are wheelchair-accessible, providing a much-needed transportation option for people with mobility disabilities. Currently, MnDOT, The PLUM Catalyst and May Mobility are working to expand goMARTI’s hours of operation and service area, and plan to add more wheelchair-accessible vehicles to the fleet.
“I have an autoimmune disorder called lymphedema. It’s really hard to walk around without my legs stinging,” said goMARTI’s 10,000th rider, Kaylien Miller. “If I didn’t have goMARTI, then I’d constantly have to depend on my parents for rides to work, rides around town, and to hang out with my friends because I don’t have a license.”
Additional data regarding goMARTI ridership:
- Over 7,500 autonomous rides have been successfully completed, and 29% of these rides carried two or more riders
- Of the 10,000 riders, 91% are repeat customers
- The average ride rating is 4.95 out of 5 stars from over 1,200 reviews
- 8% of rides included WAV (wheelchair-accessible vehicle) requests
- Ridership has doubled in the past seven months
- According to riders, the top three reasons for using the service are daily errands, commuting to work and leisure activities
- Based on rider feedback, the newly added Target store has rapidly become one of the most popular stops
Nullmax Unveils Nullmax Intelligence
Nullmax, an AI firm specializing in autonomous driving, hosted its 2024 tech conference to officially unveil its new generation of autonomous driving technology, Nullmax Intelligence (NI). The new technology features vision-only, map-free, end-to-end multimodal capabilities to advance automotive intelligence.
The NI System includes an innovative multimodal model and a brain-inspired safety model, endowing vehicles with sensory capabilities akin to seeing, hearing, and reading. It outputs visual results, scene descriptions, and driving behaviors. With this system, Nullmax aims to achieve full-scenario autonomous driving by 2025 and expand AI capabilities to fields such as passenger transportation, cargo delivery, and robotics, enabling interaction with the physical world through visual observation and cognitive thinking.
Advancing Intelligence and Accelerating Evolution
In recent years, automotive intelligence has developed rapidly, with autonomous driving application scenarios gradually expanding and advanced functions being applied in urban environments. However, challenges such as heavy reliance on rules-based programming, poor generalization, high costs, and rigid performance have limited the widespread adoption and scaling of autonomous driving.
For instance, urban navigation pilot functions often demonstrate cautious, rigid behavior and depend heavily on LiDAR and HD map information, which limits them to specific regions or roads. Additionally, high-end functions are typically limited to luxury or premium vehicle models. Similarly, the range of unmanned driving applications remains limited, hindering their value expansion.
At the launch event, Nullmax introduced its new generation of autonomous driving technology, Nullmax Intelligence, which addresses these industry challenges in a smarter, more human-like manner. Beyond visual inputs, the NI System supports the integration of sound, text, and gesture information through end-to-end multimodal model inference, and it features a brain-inspired neural network for safety. As a result, the NI System is capable of outputting visual perception results, scene descriptions, and driving behavior information.
The NI System’s architecture allows it to process various inputs like images, sounds, and texts similarly to human cognition while possessing biological instincts to react to environmental conditions. This results in higher levels of safety, intelligence, and flexibility.
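As a rough sketch of what an end-to-end multimodal architecture with separate output heads can look like, the PyTorch snippet below fuses image, audio, and text inputs into a shared representation and emits perception, scene-description, and driving-behavior outputs. The layer sizes, fusion scheme, and vocabulary are illustrative assumptions, not details of Nullmax’s NI System.

```python
import torch
import torch.nn as nn

# Illustrative architecture sketch; sizes and fusion strategy are
# assumptions, not the actual NI System design.

class MultimodalDriver(nn.Module):
    def __init__(self, d_model=256, vocab=10000, n_actions=5):
        super().__init__()
        # One lightweight encoder per modality, projected to a shared width.
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, d_model))
        self.audio_enc = nn.Linear(128, d_model)
        self.text_enc = nn.Embedding(vocab, d_model)
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        # Three heads: perception results, scene description, driving behavior.
        self.perception_head = nn.Linear(d_model, 64)     # e.g. occupancy logits
        self.caption_head = nn.Linear(d_model, vocab)     # next-token logits
        self.action_head = nn.Linear(d_model, n_actions)  # driving decision

    def forward(self, image, audio, text_tokens):
        modal = torch.stack(
            [self.vision_enc(image), self.audio_enc(audio)], dim=1)
        fused = self.fusion(
            torch.cat([modal, self.text_enc(text_tokens)], dim=1))
        pooled = fused.mean(dim=1)
        return (self.perception_head(pooled),
                self.caption_head(pooled),
                self.action_head(pooled))

model = MultimodalDriver()
outs = model(torch.randn(1, 3, 224, 224),        # camera frame
             torch.randn(1, 128),                # audio features
             torch.randint(0, 10000, (1, 8)))    # tokenized instruction
print([o.shape for o in outs])
```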
Nullmax Intelligence integrates high-level research in static perception, dynamic perception, and temporal fusion, including work accepted by top computer vision conferences such as CVPR 2024 and ECCV 2024. It deploys Yan1.2, the first non-attention general-purpose large multimodal model in China, on vehicles, and collaborates with the YanSi Brain-inspired Research Institute to construct a brain-like neural network.
Pure Vision, Map-Free, and Multimodal Model
A key feature of Nullmax Intelligence is its support for vision-only, map-free, multimodal solutions across full-scenario autonomous driving applications. Without relying on LiDAR or stereo cameras, Nullmax can perform precise obstacle detection and 3D reconstruction from vision perception alone and generate real-time local maps for navigation, achieving true mapless operation without high-precision maps.
The large multimodal approach is built primarily on vision, supplemented by other sensor inputs, and outputs a range of information including static and dynamic perception, scene language descriptions, and driving behavior actions. This comprehensive capability offers exceptional generalization, supports full-scenario applications, and requires less computational power, with sparse computing power under 100 TOPS sufficient for full-scenario driving conditions.
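To make “map-free” concrete, the sketch below builds a small bird’s-eye-view (BEV) local map each frame from camera-derived 3D points instead of loading a pre-built HD map. The grid size, resolution, and input points are illustrative assumptions rather than Nullmax’s actual pipeline.

```python
import numpy as np

# Hypothetical sketch of map-free operation: rasterize ego-frame points
# recovered by a vision stack into a fresh BEV grid every frame, rather
# than querying a pre-surveyed high-precision map.

GRID = 100    # cells per side
RES = 0.5     # meters per cell; covers 50 m x 50 m around the ego vehicle

def local_map(points_xyz: np.ndarray) -> np.ndarray:
    """Rasterize ego-frame 3D points (N, 3) into a BEV occupancy grid."""
    grid = np.zeros((GRID, GRID), dtype=np.uint8)
    # x forward, y left; shift so the ego vehicle sits at the grid center.
    cols = (points_xyz[:, 0] / RES + GRID / 2).astype(int)
    rows = (points_xyz[:, 1] / RES + GRID / 2).astype(int)
    ok = (rows >= 0) & (rows < GRID) & (cols >= 0) & (cols < GRID)
    grid[rows[ok], cols[ok]] = 1
    return grid

# e.g. lane-boundary points estimated by the vision perception stack
pts = np.array([[5.0, -1.75, 0.0], [10.0, -1.75, 0.0], [15.0, -1.70, 0.0]])
print(local_map(pts).sum())  # 3 occupied cells
```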
TIER IV Partners with Nihon Kotsu for AI Development
TIER IV, the pioneering force behind the world’s first open-source software for autonomous driving, has partnered with Nihon Kotsu on a large-scale data-sharing initiative aimed at scaling AI development. Beginning in July 2024, the companies will jointly collect data from vehicles equipped with a data recording system (DRS) developed by TIER IV.
TIER IV spearheaded the Co-MLOps (Cooperative Machine Learning Operations) data-sharing initiative in 2023, and unveiled a proof-of-concept test at CES 2024 that featured data collected in collaboration with partner companies from eight global regions. The company has been driving data collection efforts in major areas of Japan, verifying the performance of its DRS and other features of the Co-MLOps Platform. With DRS-equipped vehicles on the road and the basic functionalities for efficient data collection using active learning frameworks in place, TIER IV is poised to expand its dataset significantly in collaboration with Nihon Kotsu.
DRS overview
TIER IV’s DRS includes multiple automotive-grade LiDARs, high-resolution cameras and electronic control units (ECUs), enabling it to capture comprehensive and precise data on a vehicle’s surroundings and operation. With seamless synchronization and calibration between sensors and ECUs, the system is equipped for the high-quality data collection necessary for AI development for autonomous driving.
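As a hint of what synchronization between sensors involves, the sketch below pairs each camera frame with the nearest LiDAR sweep by timestamp and rejects pairs that diverge beyond a tolerance. The stream rates and tolerance are assumptions for illustration, not TIER IV’s DRS internals.

```python
from bisect import bisect_left

# Hypothetical sketch of cross-sensor time alignment: match each camera
# frame to its nearest LiDAR sweep, dropping pairs whose timestamps
# diverge too far to be usable for training data.

def pair_streams(cam_ts, lidar_ts, tol_s=0.02):
    """Match each camera timestamp to the nearest LiDAR timestamp."""
    pairs = []
    for t in cam_ts:
        i = bisect_left(lidar_ts, t)
        # Nearest of the two neighbors in the sorted LiDAR stream.
        best = min([x for x in (i - 1, i) if 0 <= x < len(lidar_ts)],
                   key=lambda j: abs(lidar_ts[j] - t))
        if abs(lidar_ts[best] - t) <= tol_s:
            pairs.append((t, lidar_ts[best]))
    return pairs

cam = [0.000, 0.033, 0.066, 0.100]     # ~30 Hz camera
lidar = [0.000, 0.100, 0.200]          # 10 Hz LiDAR
print(pair_streams(cam, lidar))        # [(0.0, 0.0), (0.1, 0.1)]
```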
Enhancing efficiency with the Co-MLOps Platform
Data uploaded to the cloud-based Co-MLOps Platform undergoes quality checks, anonymization for secure sharing, and tagging for efficient retrieval. Annotations are prioritized based on assessments from active learning frameworks, focusing on data that significantly enhances AI performance. This approach accelerates the MLOps process, fostering efficient AI development for autonomous driving.
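A minimal sketch of that prioritization step might look like the following: score each collected frame by model uncertainty (predictive entropy here) and forward only the top of the ranking for human annotation. The scoring function and budget are assumptions about how such a pipeline could work, not the Co-MLOps Platform’s actual logic.

```python
import numpy as np

# Hypothetical active-learning prioritization: frames the current model
# is least sure about are annotated first, since they teach it the most.

def entropy(probs: np.ndarray) -> float:
    """Predictive entropy of one frame's class probabilities."""
    p = np.clip(probs, 1e-9, 1.0)
    return float(-(p * np.log(p)).sum())

def select_for_annotation(frame_probs: dict, budget: int):
    """Return the `budget` frame IDs with the highest uncertainty."""
    scores = {fid: entropy(p) for fid, p in frame_probs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:budget]

frames = {
    "frame_001": np.array([0.98, 0.01, 0.01]),  # confident: low priority
    "frame_002": np.array([0.40, 0.35, 0.25]),  # uncertain: high priority
    "frame_003": np.array([0.70, 0.20, 0.10]),
}
print(select_for_annotation(frames, budget=2))  # ['frame_002', 'frame_003']
```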
“The data collected through this collaboration will have a wide range of applications, significantly enhancing the accuracy of autonomous driving AI while advancing efforts toward the deployment of robotaxis and the mass production of software-defined vehicles (SDVs),” said Shinpei Kato, founder, CEO and CTO of TIER IV. “We will continue to collaborate with partner companies around the world to conduct ongoing data collection, accelerating the development of a large-scale shared data platform.”
“We are very honored to be working with TIER IV to build the future of mobility,” said Ichiro Kawanabe, Director of Nihon Kotsu. “Nihon Kotsu will contribute to improving the safety of automated driving and its implementation by drawing on our 96 years of operational know-how, and we are excited to participate in the mission to connect to the next generation and create a legacy for society. We all expect that the advancement of automated driving technology will improve hired cars and cabs and realize safer, higher-quality transportation.”
Future plans
TIER IV will lead data collection efforts, using Nihon Kotsu vehicles, primarily in Tokyo through the end of 2024. The company aims to create an annotated dataset exceeding 200,000 frames by the end of 2024, building upon previously collected data. Looking ahead to 2025 and beyond, it plans to expand the fleet, aiming to gather a diverse and comprehensive dataset across various regions to fuel advancements in autonomous driving technology. TIER IV will also explore opportunities for mass-producing the DRS, adapting it for future data collection initiatives, and expanding its network to cover a wider range of areas in collaboration with partners.