Video is an obvious application of neural style rendering. The naive approach renders each frame from scratch, independently of the frames before and after it. Because the neural style algorithm is chaotic, slight alterations in the input cause dramatic changes in the output, so adjacent frames have little visual coherence. Such chaotic video is sometimes described as "noisy" and is often undesirable.
We suggest a solution that provides visual coherence and reduces "inter-frame noise":
1. When initiating the neural style convergence process, use a custom "init" image as the starting state of the search. This init image is created from the previous frame, biasing convergence toward a similar (coherent) image.
2. Where structures move from frame to frame, use optic flow to measure the motion of the pixels in the underlying video, and use that motion to warp the previous frame before using it as the init image. This technique was used effectively to create the painterly look of the film "What Dreams May Come".
3. Use a blend between the warped previous frame and the underlying video frame as a parametric control on inter-frame coherence.
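Steps 2 and 3 can be sketched in a few lines of NumPy. This is only an illustration, not Pikazo's actual code: the function names, the nearest-neighbor warp, and the `coherence` parameter are all assumptions, and a real pipeline would compute `flow` with an optic flow estimator and use bilinear sampling.

```python
import numpy as np

def warp_with_flow(prev_frame, flow):
    # Backward-warp: each output pixel samples the previous frame at the
    # location the flow field says it came from (nearest-neighbor for brevity).
    # prev_frame: (H, W, 3) array; flow: (H, W, 2) array of (dx, dy) per pixel.
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs - flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys - flow[..., 1]), 0, h - 1).astype(int)
    return prev_frame[src_y, src_x]

def make_init_image(prev_stylized, flow, current_content, coherence=0.7):
    # Blend the warped previous stylized frame with the raw current frame.
    # `coherence` in [0, 1] trades inter-frame coherence (1.0 = pure warped
    # previous frame) against fidelity to the new frame (0.0 = pure content).
    warped = warp_with_flow(prev_stylized, flow)
    return coherence * warped + (1.0 - coherence) * current_content
```

The resulting image would then be fed to the style-transfer optimizer as its starting state instead of random noise or the raw content frame.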
Video test of the Pikazo neural style algorithm, varying parameter values for coherence, smear, and boil, and testing optic flow.
Uncompressed video: http://www.qarl.com/secret/...
music "colors" by adult fur. http://adultfur.bandcamp.co...