Self-Driving Cars Learn to Predict Pedestrian Movement
Scientists are using humans' gait, body symmetry and foot placement to teach self-driving cars to recognise and predict pedestrian movements with greater precision than current technologies.
Data collected by vehicles through cameras, LiDAR and the global positioning system (GPS) allowed the researchers at the University of Michigan in the US to capture video snippets of humans in motion and then recreate them in three-dimensional (3D) computer simulation. With that, they have created a "biomechanically inspired recurrent neural network" that catalogues human movements. The network can help predict poses and future locations for one or several pedestrians up to about 50 yards from the vehicle, roughly the scale of a city intersection.
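The article does not publish the network's architecture, but the general idea, a recurrent model that consumes a sequence of observed 3D body poses and emits the next predicted pose, can be sketched as follows. All layer sizes, weight names and the joint count here are illustrative assumptions, with random weights standing in for trained parameters.

```python
import numpy as np

# Illustrative sizes: 17 body joints in 3D, flattened to one pose vector.
NUM_JOINTS, POSE_DIM, HIDDEN_DIM = 17, 17 * 3, 64

rng = np.random.default_rng(0)
# Randomly initialised weights stand in for trained parameters.
W_in = rng.normal(0, 0.1, (HIDDEN_DIM, POSE_DIM))
W_h = rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(0, 0.1, (POSE_DIM, HIDDEN_DIM))

def predict_next_pose(pose_sequence):
    """Run a simple recurrence over the observed poses and
    return a prediction for the pose at the next time step."""
    h = np.zeros(HIDDEN_DIM)
    for pose in pose_sequence:          # each pose: (POSE_DIM,) vector
        h = np.tanh(W_in @ pose + W_h @ h)  # hidden state carries motion history
    return W_out @ h                    # decode hidden state into the next pose

# Example: ten observed frames of a walking pedestrian (random stand-in data).
observed = rng.normal(0, 1, (10, POSE_DIM))
next_pose = predict_next_pose(observed)
print(next_pose.shape)  # (51,)
```

Because the hidden state is updated frame by frame, the same recurrence can be rolled forward repeatedly to predict a pedestrian's pose several steps ahead, not just one.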
LiDAR is a surveying method that measures the distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. "Prior work in this area has typically only looked at still images. It wasn't really concerned with how people move in three dimensions," said Ram Vasudevan, an assistant professor at the University of Michigan.
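The distance measurement the article describes follows from a simple time-of-flight calculation: the pulse travels to the target and back, so the range is half the round-trip path. A minimal illustration (the 333.6-nanosecond figure is just a worked example):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_distance(round_trip_time_s):
    """Range to a target from a laser pulse's round-trip time.
    The pulse travels out and back, so halve the total path length."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after ~333.6 nanoseconds indicates a target ~50 m away.
print(round(lidar_distance(333.6e-9), 1))  # 50.0
```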
"But if these vehicles are going to operate and interact in the real world, we need to make sure our predictions of where a pedestrian is going do not coincide with where the vehicle is going next," said Vasudevan. Equipping vehicles with the necessary predictive power requires the network to dive into the minutiae of human movement: the pace of a human's gait (periodicity), the mirror symmetry of limbs, and the way in which foot placement affects stability during walking.
Much of the machine learning used to bring autonomous technology to its current level has dealt with two-dimensional images -- still photos. A computer shown several million photos of a stop sign will eventually come to recognise stop signs in the real world and in real time. However, by utilising video clips that run for several seconds, the system can study the first half of the snippet to make its predictions, and then verify the accuracy with the second half.
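The predict-then-verify scheme described above can be illustrated with a toy example. The helper names and the constant-velocity predictor here are hypothetical stand-ins for the actual model; the point is only the split of each clip into an observed half and a held-out half used to score the predictions.

```python
def split_clip(frames):
    """Split a video clip: the first half is shown to the model,
    the second half serves as ground truth for checking predictions."""
    mid = len(frames) // 2
    return frames[:mid], frames[mid:]

def mean_position_error(predicted, actual):
    """Average per-frame gap between predicted and true positions."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Toy example: 1-D pedestrian positions over 8 frames of steady walking.
clip = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
observed, held_out = split_clip(clip)

# A naive constant-velocity predictor extrapolates from the observed half.
step = observed[-1] - observed[-2]
predictions = [observed[-1] + step * (i + 1) for i in range(len(held_out))]

print(mean_position_error(predictions, held_out))  # 0.0 for a perfectly steady gait
```

Real pedestrians accelerate, pause and turn, which is exactly where a learned model that accounts for gait and pose should beat this kind of straight-line extrapolation.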
"Now, we are training the system to recognise motion and making predictions of not just one single thing -- whether it is a stop sign or not -- but where that pedestrian's body will be at the next step and the next and the next," said Matthew Johnson-Roberson, an associate professor at the University of Michigan.
"If a pedestrian is playing with their phone, you know they are distracted," Vasudevan said. "Their pose and where they are looking is telling you a lot about their level of attentiveness. It is also telling you a lot about what they are capable of doing next," he said. The results show that the new system improves a driverless vehicle's ability to anticipate what is most likely to happen next.