Autonomous & Self-Driving Vehicle News: Gatik, Pitney Bowes, Aeva, MIT CSAIL, Nexar & SiLC

In autonomous and self-driving vehicle news are Gatik, Pitney Bowes, Aeva, MIT CSAIL, Nexar & SiLC.

Gatik Provides Autonomous Box Trucks to Pitney Bowes

Gatik, the market leader in autonomous middle mile logistics, announced a multi-year commercial agreement with Pitney Bowes (NYSE: PBI), a global shipping and mailing company that provides technology, logistics, and financial services to more than 90 percent of the Fortune 500. Under the agreement, Gatik will integrate its Class 6 autonomous box trucks into the Pitney Bowes ecommerce logistics network in the Dallas, Texas market beginning in Q1 2023.

Gatik’s autonomous fleet, purpose-built for the middle mile, will establish a continuous, operational loop across the Pitney Bowes ecommerce logistics network in Dallas, making multiple deliveries per day with speed and efficiency.

The deployment aims to establish a more responsive and flexible logistics network for Pitney Bowes by delivering excellent service levels and reliability, improving speed of deliveries, and providing end-to-end visibility while lowering transportation costs.

During the initial phase, a safety operator will ride in the autonomous vehicles to monitor performance. Data collected from each delivery will be used to improve network design and identify additional opportunities for cost savings and service improvements as Pitney Bowes looks to integrate autonomous vehicles across its national ecommerce logistics network.

The unique advantages of deploying Gatik’s autonomous middle mile solution will help to future-proof logistics operations, increase asset utilization, and support a shift to a more direct, high-frequency transportation network. The result is a more efficient network and faster, more reliable service for Pitney Bowes clients.

Aeva to Showcase 4D LiDAR

Aeva® (NYSE: AEVA), a leader in next-generation sensing and perception systems, announced it will showcase its 4D LiDAR™ technology for the automotive industry and beyond at the ADAS & Autonomous Vehicle Technology Expo in San Jose on September 7-8, 2022. Aeva will demonstrate its new Aeries™ II sensor and 4D LiDAR-on-chip technology at booth 6010 in the San Jose Convention Center.

Visitors to Aeva’s booth will be able to experience a live demo of Aeries II and learn about the company’s next-generation sensing and perception technology for a range of automotive applications, including Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. Aeva will also provide demo drives so attendees can experience 4D LiDAR in real time on the road, showcasing the unique benefits of Aeva’s technology, including instant velocity detection, long-range object detection, and Ultra Resolution™, a camera-level image with up to 20 times the resolution of legacy time-of-flight LiDAR sensors.

Aeries II delivers breakthrough sensing and perception performance using Aeva’s Frequency Modulated Continuous Wave (FMCW) technology to directly detect the instant velocity of each point, with centimeter per second precision, in addition to precise 3D position at long range. Its compact design is 75% smaller than the previous generation Aeries sensor while achieving the strict environmental and operational standards expected by OEMs and automotive customers.
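
For readers unfamiliar with how an FMCW sensor measures velocity directly: the Doppler shift moves the up-chirp and down-chirp beat frequencies in opposite directions, so combining the two separates range from radial velocity. The sketch below illustrates that relationship using a textbook triangular-chirp model with made-up parameters; it is not Aeva’s implementation.

```python
# A minimal triangular-chirp FMCW sketch, not Aeva's implementation: the chirp
# parameters below are illustrative assumptions chosen only to make the math runnable.

C = 299_792_458.0         # speed of light, m/s
WAVELENGTH = 1550e-9      # assumed laser wavelength, m
CHIRP_BANDWIDTH = 4e9     # assumed optical frequency sweep, Hz
CHIRP_DURATION = 10e-6    # assumed duration of one up- or down-chirp, s
SLOPE = CHIRP_BANDWIDTH / CHIRP_DURATION   # chirp slope, Hz/s

def range_and_velocity(f_beat_up: float, f_beat_down: float):
    """Combine up-chirp and down-chirp beat frequencies (Hz) for one point.

    The round-trip delay shifts both beats the same way, while the Doppler
    shift moves them in opposite directions, so the sum isolates range and
    the difference isolates radial velocity.
    """
    delay = (f_beat_up + f_beat_down) / (2.0 * SLOPE)   # round-trip time, s
    rng = C * delay / 2.0                               # metres
    f_doppler = (f_beat_down - f_beat_up) / 2.0         # Hz
    vel = f_doppler * WAVELENGTH / 2.0                  # m/s, positive = approaching
    return rng, vel

def simulate_beats(rng_m: float, vel_mps: float):
    """Forward model: beat frequencies for a target at rng_m metres closing at vel_mps."""
    delay = 2.0 * rng_m / C
    f_doppler = 2.0 * vel_mps / WAVELENGTH
    return SLOPE * delay - f_doppler, SLOPE * delay + f_doppler

# Round-trip check: a target 150 m away closing at 10 m/s.
print(range_and_velocity(*simulate_beats(150.0, 10.0)))   # ~ (150.0, 10.0)
```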

MIT CSAIL Offers New VISTA of Autonomous Simulation Near Crashes

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) created “VISTA 2.0,” a data-driven simulation engine where vehicles can learn to drive in the real world and recover from near-crash scenarios. What’s more, all of the code is being open-sourced to the public.

“Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary. With this release, the research community will have access to a powerful new tool for accelerating the research and development of adaptive robust control for autonomous driving,” says MIT Professor and CSAIL Director Daniela Rus, senior author on a paper about the research.

VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and can also incorporate other simulated vehicles to model complex driving situations. VISTA is open source, and the code is publicly available.

VISTA 2.0 builds on the team’s previous model, VISTA, and is fundamentally different from existing AV simulators because it is data-driven: it was built and photorealistically rendered from real-world data, which enables direct transfer to reality. While the initial iteration supported only single-car lane-following with one camera sensor, achieving high-fidelity data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized.

Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. With much less data than previous models, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data.

The team was able to scale the complexity of the interactive driving tasks for things like overtaking, following, and negotiating, including multiagent scenarios in highly photorealistic environments.

Training AI models for autonomous vehicles requires hard-to-secure data covering many varieties of edge cases and strange, dangerous scenarios, because most recorded driving data (thankfully) is just run-of-the-mill, day-to-day driving. Logically, we can’t simply crash into other cars to teach a neural network how not to crash into other cars.

Recently, there’s been a shift away from more classic, human-designed simulation environments to those built up from real-world data. The latter have immense photorealism, but the former can easily model virtual cameras and lidars. With this paradigm shift, a key question has emerged: Can the richness and complexity of all of the sensors that autonomous vehicles need, such as lidar and event-based cameras that are more sparse, accurately be synthesized?

Lidar sensor data is much harder to interpret in a data-driven world — you’re effectively trying to generate brand-new 3D point clouds with millions of points, only from sparse views of the world. To synthesize 3D lidar point clouds, the team used the data that the car collected, projected it into a 3D space coming from the lidar data, and then let a new virtual vehicle drive around locally from where that original vehicle was. Finally, they projected all of that sensory information back into the frame of view of this new virtual vehicle, with the help of neural networks.
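
The core geometric step in that description, expressing recorded lidar returns in the frame of a repositioned virtual vehicle, is a pair of rigid-body transforms. The sketch below shows only that reprojection under assumed 4x4 pose matrices; it is not the VISTA 2.0 code, and it omits the neural network that fills in views the original scan never saw.

```python
# A minimal geometric sketch of the reprojection step described above, not the
# VISTA 2.0 code: poses are assumed 4x4 homogeneous transforms (vehicle -> world),
# and `points` is an (N, 3) array of lidar returns in the recording vehicle's frame.
import numpy as np

def reproject_points(points, recorded_pose, virtual_pose):
    """Express lidar points captured by the recording vehicle in the frame of a
    virtual vehicle placed elsewhere in the same scene."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])   # (N, 4) homogeneous
    world = homo @ recorded_pose.T                              # recording frame -> world
    virtual = world @ np.linalg.inv(virtual_pose).T             # world -> virtual frame
    return virtual[:, :3]

# Example: the virtual vehicle sits 1.5 m to the left of the recorded pose.
recorded = np.eye(4)
virtual = np.eye(4)
virtual[1, 3] = 1.5
pts = np.array([[10.0, 0.0, 0.5], [20.0, -2.0, 0.3]])
print(reproject_points(pts, recorded, virtual))
```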

Together with the simulation of event-based cameras, which operate at upwards of thousands of events per second, the simulator was capable not only of simulating this multimodal information, but of doing so in real time. That makes it possible to train neural nets offline and also to test them online, on the car, in augmented-reality setups for safe evaluation. “The question of if multisensor simulation at this scale of complexity and photorealism was possible in the realm of data-driven simulation was very much an open question,” says Alexander Amini, an MIT CSAIL PhD student on the project.
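
Event cameras report per-pixel brightness changes rather than full frames, which is what makes them simulatable from rendered imagery. The snippet below is a simplified version of the standard contrast-threshold event model, not the VISTA 2.0 implementation; real event simulators also interpolate between frames and track a per-pixel reference level.

```python
# A simplified version of the standard contrast-threshold event-camera model,
# not the VISTA 2.0 implementation: a pixel emits an event when its log
# intensity changes by more than a threshold between two rendered frames.
import numpy as np

def events_between_frames(prev_log, curr_log, threshold=0.2):
    """Return (rows, cols, polarity) for pixels whose log-intensity change
    exceeds the contrast threshold; polarity is +1 (brighter) or -1 (darker)."""
    diff = curr_log - prev_log
    rows, cols = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[rows, cols]).astype(np.int8)
    return rows, cols, polarity

# Usage: feed consecutive photorealistic frames rendered by the simulator.
prev = np.log1p(np.random.rand(4, 4).astype(np.float32))
curr = prev.copy()
curr[1, 2] += 0.5   # one pixel that brightened between frames
print(events_between_frames(prev, curr))
```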

With that, the driving school becomes a party. In the simulation, you can move around, use different types of controllers, simulate different types of events, create interactive scenarios, and just drop in brand-new vehicles that weren’t even in the original data. They tested for lane following, lane turning, car following, and more dicey scenarios like static and dynamic overtaking (seeing obstacles and moving around so you don’t collide). With multi-agent simulation, both real and simulated agents interact, and new agents can be dropped into the scene and controlled any which way.

Amini and fellow MIT CSAIL PhD student Tsun-Hsuan Wang wrote the paper alongside Zhijian Liu, MIT CSAIL PhD student; Igor Gilitschenski, assistant professor in computer science at the University of Toronto; Wilko Schwarting, AI research scientist and MIT CSAIL PhD ’20; Song Han, associate professor at MIT’s Department of Electrical Engineering and Computer Science; Sertac Karaman, associate professor of aeronautics and astronautics at MIT; and Daniela Rus, MIT professor and CSAIL director. The researchers presented the work at the IEEE International Conference on Robotics and Automation (ICRA) in Philadelphia.

This work was supported by the National Science Foundation and Toyota Research Institute. The team acknowledges the support of NVIDIA with the donation of the Drive AGX Pegasus.

Nexar Releases Driver Behavioral Mapping Data

Nexar, a leading AI computer vision company, announced the release of its Driver Behavioral Mapping data. The Driver Behavioral Map gives insight into different road segments, driver types, weather, and road conditions. Nexar’s dash cams capture a wide range of crowd-sourced data on actual human driving behavior, which is then aggregated and overlaid on a high-definition base map for use in autonomous and assisted driving.

Because agents that have driven in an area, whether human or autonomous, know more than agents that have not, AVs will use Nexar’s real-time data to humanize driving, training on the local driving culture and mapping crucial driving habits. AVs can use these maps to determine when to switch lanes before a turn, how to decelerate when cornering, where virtual stop lines lie, and more.

“A self-driving car that drives only according to a raw map would be an immediate danger due to its robotic style of driving,” said Eran Shir, Co-founder and CEO of Nexar. “It’s not necessarily that humans drive better than robots, it’s that AVs need a lot of human data obtained by those who have driven through a particular area. Without even being aware, we make hundreds of decisions that adapt to local conditions, culture, and comfort each time we get behind the wheel. AVs need to sync into this behavior in order to provide the most secure and comfortable ride.”

Nexar’s Driver Behavioral Map accounts for speed distribution, acceleration distribution, turn probability at intersections, lane-change probability, and virtual crosswalks, among many other signals. Behaviorally trained AVs will better understand, measure, and benchmark safety-related behavior (stop lines, school zones, etc.) and drive better for the road conditions and visibility, providing a smoother and safer ride. Nexar’s maps will empower AVs with human driving behavior while retaining the safety advantages of a robot driver.
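
As a concrete, purely hypothetical illustration of what such per-segment behavioral data could look like to a planner, the sketch below defines a simple record type; the field names and example values are invented for illustration and are not Nexar’s schema.

```python
# A purely hypothetical sketch of a per-road-segment behavioral record; the
# field names and the example values are invented for illustration and are
# not Nexar's schema.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SegmentBehavior:
    segment_id: str                         # identifier of the base-map road segment
    speed_mps: Dict[str, float]             # observed speed percentiles, e.g. {"p50": 12.0}
    accel_mps2: Dict[str, float]            # observed acceleration/deceleration percentiles
    turn_probability: Dict[str, float]      # {"left": ..., "right": ..., "straight": ...}
    lane_change_probability: float          # share of drivers who change lanes here
    virtual_stop_line_m: Optional[float] = None   # offset of the de facto stop position
    virtual_crosswalk: bool = False         # pedestrians habitually cross on this segment
    conditions: Dict[str, dict] = field(default_factory=dict)  # weather / time-of-day splits

# Example: a planner could begin decelerating earlier on a segment where most
# human drivers turn right and brake gently.
seg = SegmentBehavior(
    segment_id="segment-001",               # hypothetical ID
    speed_mps={"p50": 12.0, "p85": 15.5},
    accel_mps2={"p50_decel": -1.2},
    turn_probability={"left": 0.05, "right": 0.60, "straight": 0.35},
    lane_change_probability=0.18,
    virtual_stop_line_m=2.4,
)
print(seg.turn_probability["right"])
```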

“Nexar is used by both commercial and ordinary drivers, during all hours of the day, on different road segments, and in all weather conditions,” continued Shir. “There is no more efficient or cost-effective way to collect this diverse set of human driving data that is such a crucial part of our journey to advance a safe future for autonomous driving. With the right data, an AV operating off of Nexar’s Driver Behavioral Map will continue on the road to becoming indistinguishable from a human-driven car in terms of the flow of driving.”

Cars outfitted with Nexar’s smart dash cams drive over 160 million miles per month, delivering valuable driving data all over the US. By accessing a real-time record of how other vehicles drive on the same road segment at different times of day, AVs can ‘see’ what’s ahead, and make use of a map that is constantly refreshed. The maps cover all 50 states.

SiLC Eyeonic Vision Sensor Reaches 1 Kilometer Range

Furthering its mission to change the state of machine vision, silicon photonics innovator SiLC Technologies, Inc. (SiLC) has announced that its Eyeonic Vision Sensor has demonstrated the ability to perceive, identify, and avoid objects at a range of more than 1 kilometer. Having previously demonstrated a detection range of more than 500 meters at CES earlier this year, SiLC has now optimized its technology to go beyond 1000 meters – a feat that no other company can claim.

Ultra-long-range visibility is a requirement in many industries that utilize machine vision, including automotive, metrology, construction, drones, and more. Specific scenarios include providing enough time for a vehicle to evade an obstacle at highway speeds, enabling a drone to avoid others in the sky, and controlling deforestation by making precision mapping and surveying of forests possible.

Next-gen vision sensors that incorporate millimeter-level accuracy, depth, and instantaneous velocity are key to true autonomous driving and other machine vision applications – and FMCW LiDAR is the optimal technology to make this a reality.

First announced in December of 2021, SiLC’s Eyeonic Vision Sensor is a first-of-its-kind FMCW LiDAR transceiver. The heart of the Eyeonic Vision Sensor is SiLC’s silicon photonic chip, which integrates FMCW LiDAR functionality into a single, tiny chip. Representing decades of silicon photonics innovation, this chip is the only readily integratable solution for manufacturers building the next generation of autonomous vehicles, security solutions, and industrial robots.

“Our technology platform is flexible enough to address ultra-long-range to ultra-short-range applications which speaks to our understanding of what is needed to truly make machine vision as good or better than human vision,” said Dr. Mehdi Asghari, SiLC’s CEO and founder. “The highly detailed, accurate instantaneous velocity and ultra-long-range information that our Eyeonic Vision Sensor provides is the key to helping robots classify and predict their environment – in the same way that the human eye and brain process together.”

SiLC’s achievement of industry-leading long-range detection follows a string of important announcements from the company, including new partnerships with AutoX, Varroc, and Hokuyo.