A team of scientists at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory used a process called Eulerian Video Magnification to amplify color and movement in video, demonstrating how otherwise invisible motion can be detected in a human subject.
The program was first created as a way to monitor newborn babies without having to come into physical contact with them. In the course of observation, the research team discovered that the algorithm could detect changes so minute that they are invisible to the naked eye.
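The core Eulerian idea can be illustrated compactly: filter each pixel's intensity over time to isolate a frequency band of interest, such as the human pulse, then amplify that signal and add it back to the original video. The sketch below is a minimal illustration in Python, assuming a grayscale video held as a NumPy array; the function name and parameters are hypothetical, and the published method additionally decomposes each frame into a spatial pyramid, which is omitted here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def eulerian_magnify(frames, fps, low=0.8, high=3.0, alpha=50.0):
    """Amplify subtle temporal variations in a video (simplified sketch).

    frames: float array of shape (num_frames, height, width)
    fps:    frames per second of the source video
    low, high: passband in Hz (0.8-3.0 Hz spans typical heart rates)
    alpha:  amplification factor for the filtered signal
    """
    # Design a temporal bandpass filter around the band of interest.
    nyquist = fps / 2.0
    b, a = butter(2, [low / nyquist, high / nyquist], btype="band")

    # Filter each pixel's intensity along the time axis.
    filtered = filtfilt(b, a, frames, axis=0)

    # Amplify the tiny variations and recombine with the original.
    return frames + alpha * filtered
```

On a video of a resting face, the amplified output makes the faint, pulse-synchronized reddening of the skin visible, which is how a heart rate can be read out without touching the subject.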
The method was applied to other videos and could potentially serve a purpose in search-and-rescue efforts, according to Prof. William T. Freeman, who led the research team. As reported by The New York Times, he explained that rescuers could tell from a distance whether someone in danger was still breathing.
"Once we amplify these small motions, there's like a whole new world you can look at," said Freeman.
The team presented the program last year at the SIGGRAPH computer graphics conference. Co-author Michael Rubinstein said there was a tremendous response from health care and law enforcement officials inquiring about the program's availability.
The program could potentially be incorporated into Google Glass: "People want to be able to analyze their opponent during a poker game or blackjack and be able to know whether they're cheating or not, just by the variation in their heart rate," said Rubinstein.
The team is currently making improvements to the program, with the goal of eventually turning it into a smartphone app.
"I want people to look around and see what's out there in this world of tiny motions," said Freeman. The project is financed by laptop manufacturers Quanta Research Cambridge in Taiwan, the National Science Foundation Royal Dutch Shell and others.