Just last month, Google released a beta of its Cloud Video Intelligence API, an AI that classifies videos. Now, roughly a month later, researchers from the University of Washington have managed to defeat it. The trick is to periodically insert a still photo into a video; that alone is enough to lead the AI to an inaccurate result.
Google describes its technology as a deep-learning classifier built with frameworks such as TensorFlow and applied to large-scale platforms, including YouTube, The Register reported. Nevertheless, the university's researchers were able to manipulate its results with ease.
In their research paper, the team explained that the manipulation could go unnoticed by regular viewers of the videos. They altered Google's demo video, replacing one frame every couple of seconds with a photo of an Audi. So while the actual video is about a tiger, the altered version led the system to decide that the footage is about an Audi instead.
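To make the idea concrete, here is a minimal sketch of that kind of frame replacement in Python using OpenCV. The file names, the two-second interval, and the output codec are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the frame-insertion trick described above: copy a video
# frame by frame, swapping in a still photo at a fixed interval.
# File names, the 2-second interval, and the codec are assumptions.
import cv2

video_in = cv2.VideoCapture("tiger.mp4")   # hypothetical source video
still = cv2.imread("audi.jpg")             # hypothetical still photo

fps = video_in.get(cv2.CAP_PROP_FPS)
width = int(video_in.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video_in.get(cv2.CAP_PROP_FRAME_HEIGHT))
interval = max(1, int(fps * 2))            # one swapped frame every ~2 seconds

# Resize the still so it can stand in for a regular video frame.
still = cv2.resize(still, (width, height))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
video_out = cv2.VideoWriter("tiger_altered.mp4", fourcc, fps, (width, height))

frame_index = 0
while True:
    ok, frame = video_in.read()
    if not ok:
        break
    # Swap in the still photo at the chosen interval; keep all other frames.
    if frame_index % interval == 0:
        video_out.write(still)
    else:
        video_out.write(frame)
    frame_index += 1

video_in.release()
video_out.release()
```

Because only one frame in every few dozen is replaced, the altered clip looks essentially unchanged to a human viewer while the classifier's label shifts entirely.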
The research demonstrates that an adversary can easily bypass a filtering system by adding a benign image to a video containing illegal content. The fact that doing so requires no specialized knowledge of AI systems and their algorithms makes this loophole particularly disturbing, per Digital Trends. The tiger video is not the only one the team successfully manipulated: the researchers also inserted images of a bowl of pasta into a video of primatologist Jane Goodall observing apes, which resulted in the video being tagged as spaghetti-related rather than as one about gorillas.
In the end, what the research shows is that AI is still a long way from matching humans at tasks like determining what a video is about. Subliminal frames in a video may affect the human psyche in one way or another, but people are far less likely to conclude that a video with a clear topic is actually about something totally unrelated.