Predicting what a person will do next from their body language comes naturally to humans, but not to computers. When we meet someone, they may greet us with a wave, a handshake, or even a fist bump; we understand the situation and react appropriately.
Recently, researchers from Columbia University announced a computer vision technique that uses higher-level connections between people, animals, and objects to give a computer a more intuitive sense of what will happen next.
“Our algorithm is a step toward artificial intelligence that can better predict human behavior and better coordinate its actions. Our results will open up more possibilities for human-machine cooperation, intelligent driving, and assistive technologies,” said Carl Vondrick, assistant professor of computer science at Columbia University.
According to the researchers, this technique is by far the most accurate method for predicting video action events several minutes into the future. After analyzing thousands of hours of movies, sports games, and TV shows, the system learned to predict hundreds of activities, from handshakes to kisses. And when it cannot predict a specific action, it finds a higher-level concept that connects the possibilities.
Past attempts at predictive machine learning have focused on predicting only one action at a time: the algorithm decides whether to classify an action as a hug, a high-five, a handshake, or even a non-action such as “ignore.” But when uncertainty is high, most machine learning models cannot find commonalities among the possible options.
Didac Suris and Ruoshi Liu, engineering PhD students at Columbia University, decided to approach long-term forecasting from a different perspective. “Not everything in the future is predictable. When a person cannot accurately foresee what will happen, they act cautiously and predict at a higher level of abstraction. Our algorithm is the first to learn this ability to reason abstractly about future events.”
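The back-off idea described in the quote can be sketched in a few lines: when no single concrete action is confident enough, fall back to a shared parent concept. The hierarchy, labels, and confidence threshold below are hypothetical illustrations, not the researchers' actual model.

```python
# Toy concept hierarchy: concrete actions roll up to more abstract parents.
# (Labels and structure are illustrative, not from the Columbia paper.)
PARENT = {
    "handshake": "greeting",
    "hug": "greeting",
    "high-five": "greeting",
    "kiss": "greeting",
    "greeting": "interaction",
}

def predict_with_backoff(probs: dict, threshold: float = 0.5) -> str:
    """Return the most likely action, backing off to a more abstract
    concept when no single concrete action is confident enough."""
    action, confidence = max(probs.items(), key=lambda kv: kv[1])
    while confidence < threshold and action in PARENT:
        # Back off to the parent concept and aggregate the probability
        # mass of all actions that roll up to it.
        action = PARENT[action]
        confidence = sum(
            p for a, p in probs.items() if PARENT.get(a) == action
        ) + probs.get(action, 0.0)
    return action

# No single greeting is confident, but together they are: predict "greeting".
print(predict_with_backoff({"handshake": 0.3, "hug": 0.3, "high-five": 0.4}))
# A confident single action is returned directly.
print(predict_with_backoff({"handshake": 0.9, "hug": 0.05, "high-five": 0.05}))
```

The point of the sketch is the behavior the quote describes: rather than forcing a low-confidence guess among concrete actions, the predictor commits only to the level of abstraction it can support.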
The researchers say this technology could bring artificial intelligence closer to judging situations and making nuanced decisions, rather than executing pre-programmed actions. This is a crucial step toward establishing trust between humans and computers. “Trust comes from robots that truly understand human feelings. If machines can understand and predict our behavior, computers will be able to seamlessly help people in daily activities,” Liu said.
Although the new algorithm makes more accurate predictions on benchmark tasks than previous methods, the next step is to verify that it works outside the laboratory.
The researchers said that if the system can work in a variety of environments, the possibilities for deploying machines and robots grow much greater, potentially improving our health and safety. The team plans to keep improving the algorithm's performance using larger datasets, more computing power, and other forms of geometry.
Founded in 2011, Datatang is a professional artificial intelligence data service provider committed to providing high-quality training data and data services for global AI companies. Relying on its own data resources, technical advantages, and extensive data-processing experience, Datatang provides data services to more than 1,000 companies and institutions worldwide. Datatang entered the Chinese stock market (NEEQ: 831428) in 2014 and became the first listed company in China's artificial intelligence data service industry.
If you need data services, please feel free to contact us: firstname.lastname@example.org