Style transfer is an interesting problem in machine learning research where we have two input images, one for content and another for style, and the output is our content image re-imagined in the new style. The content can be a photo straight from our camera, and the style can be a painting, which leads to super fun and really good-looking results.
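To make that setup concrete, here is a minimal sketch of the classic optimization-based formulation in the spirit of Gatys et al., assuming PyTorch and torchvision are available; the layer indices, loss weights, and optimizer settings are illustrative choices, not values from the paper discussed below.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
# Pretrained VGG-19 as a fixed feature extractor.
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()

def features(x, layers=(1, 6, 11, 20, 29)):
    """Collect activations from a few VGG layers (indices are illustrative)."""
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
        if i == max(layers):
            break
    return feats

def gram(f):
    """Gram matrix of a feature map: channel correlations capture style."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, steps=300, style_weight=1e5):
    """content, style: ImageNet-normalized tensors of shape (1, 3, H, W)."""
    output = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([output], lr=0.02)
    c_feats = [f.detach() for f in features(content)]
    s_grams = [gram(f).detach() for f in features(style)]
    for _ in range(steps):
        opt.zero_grad()
        o_feats = features(output)
        content_loss = F.mse_loss(o_feats[2], c_feats[2])  # match content at a mid layer
        style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(o_feats, s_grams))
        (content_loss + style_weight * style_loss).backward()
        opt.step()
    return output.detach()
```

Note that the output pixels themselves are the optimization variable here, which is why this classic formulation is slow; the method discussed below sidesteps that.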
We have seen plenty of papers doing variations of style transfer, but can we push this concept further?
And the answer is yes! For instance, style transfer can also be done for video!
In this new method, we first record a video with our camera. Then, we take a still frame from the video and paint our artistic style onto it, and the method propagates that style to the entirety of the video! This is an improvement over previous methods, which either take too long or need expensive pre-training steps. Click here for a video.
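Under the hood, the core idea can be sketched as few-shot patch-based training: a small fully convolutional network learns to map the keyframe to its hand-stylized version from randomly sampled, aligned patch pairs, and because it is fully convolutional, it can then stylize every full-resolution frame. Below is a minimal sketch assuming PyTorch; the tiny architecture, patch size, and hyperparameters are illustrative stand-ins, not the authors' exact setup (the paper uses a larger U-Net-style model).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny image-to-image network (illustrative, not the authors' architecture).
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def random_patches(frame, styled, n=16, size=32):
    """Sample n aligned patch pairs from a keyframe and its stylized version."""
    _, h, w = frame.shape
    xs, ys = [], []
    for _ in range(n):
        y = torch.randint(0, h - size, (1,)).item()
        x = torch.randint(0, w - size, (1,)).item()
        xs.append(frame[:, y:y + size, x:x + size])
        ys.append(styled[:, y:y + size, x:x + size])
    return torch.stack(xs), torch.stack(ys)

def train_on_keyframe(frame, styled, iters=2000):
    """frame, styled: float tensors of shape (3, H, W) in [0, 1]."""
    for _ in range(iters):
        x, y = random_patches(frame, styled)
        opt.zero_grad()
        F.l1_loss(net(x), y).backward()  # match the stylized patch
        opt.step()

@torch.no_grad()
def stylize_video(frames):
    # Fully convolutional, so full frames can be fed in directly at inference.
    return [net(f.unsqueeze(0)).squeeze(0) for f in frames]
```

Training on small patches rather than full frames is what keeps the feedback loop fast enough to feel interactive.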
With this new method, we can just start drawing and see the results right away. But it gets even better: due to the interactive nature of this new technique, we can even do this live. All we need to do is change our input drawing, and it transfers the new style to the video as fast as we can draw. This way, we can refine our input style for as long as we wish, or until we find the perfect way to stylize the video. Click here for a video.
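To illustrate how the live editing fits together, here is a rough sketch that reuses net, opt, and random_patches from the sketch above; grab_canvas() and show() are hypothetical stand-ins for the drawing UI and the video preview, which this sketch does not implement. The idea is simply that patch training is cheap enough to interleave a handful of optimization steps with preview frames.

```python
import torch
import torch.nn.functional as F

def interactive_session(frame, video_frames, rounds=100):
    """Interleave quick patch-training steps with live previews."""
    for _ in range(rounds):
        styled = grab_canvas()  # hypothetical: current state of the artist's drawing
        for _ in range(20):     # a handful of cheap training steps per edit
            x, y = random_patches(frame, styled)
            opt.zero_grad()
            F.l1_loss(net(x), y).backward()
            opt.step()
        with torch.no_grad():   # hypothetical preview on the first video frame
            show(net(video_frames[0].unsqueeze(0)).squeeze(0))
```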
It is great to see that this new method also retains temporal consistency over a long time frame, which means that even if the marked-up keyframe is far away in the video, its style can still be applied and the outputs will show minimal flickering.
We can play not only with the colors but with the geometry too! Look, we can warp the style image, and the warp will be reflected in the output as well. Click here for a video.
The authors achieve this with a patch-based training strategy, roughly along the lines of the sketch above, while maintaining temporal consistency. For more details of the approach used, visit here.
By the way, this technique also comes with some limitations. For instance, there is still some temporal flickering in the outputs, and in some cases, separating the foreground and the background is challenging.
Author
Shubham Bindal
References
- Interactive Video Stylization Using Few-Shot Patch-Based Training: https://arxiv.org/pdf/2004.14489.pdf
- Website: https://ondrejtexler.github.io/