Amazing AI Video Example Demonstrates Video to 70s Cartoon

Check out this amazing AI video creation made with IP-Adapter and AnimateDiff by Nathan Shipley. We look at his post, the technique, and the final product.

Amazing Video Example Demonstrates Video to 70s Cartoon with AI

A post yesterday on Reddit brought to my attention a truly awesome creation made with AnimateDiff and IP-Adapter. It demonstrates transforming real footage to resemble a 70s cartoon style. On top of that, the editing lets a narrative unfold far more than it could have without the additional changes.

Video to 70’s Cartoon with AnimateDiff and IPAdapter. I created an IPAdapter image for each shot in 1111 and used that as input for IPAdapter-Plus in Comfy.
by u/AtreveteTeTe in r/StableDiffusion

The original post is seen above.

What is IP-Adapter? – Image Prompting for Pre-Trained Models

IP-Adapter is a lightweight adapter that gives a pre-trained text-to-image diffusion model the ability to generate images from image prompts, not just text. It is available on GitHub and Hugging Face. With only 22M parameters, it can achieve performance comparable to, or even better than, a fully fine-tuned image prompt model.

Combining AI Video and Human Technique

The author shared in the post how the video was created using AI video tools but also 'human' video editing (and, of course, it was shot and acted by humans). This includes removing backgrounds in After Effects, editing and replacing certain elements, and cutting the video together. It all culminates in an amazing final product.

Take a Look at the Video Side-by-Side

Video to 70’s Cartoon with AnimateDiff and IPAdapter. I created an IPAdapter image for each shot in 1111 and used that as input for IPAdapter-Plus in Comfy.
by u/AtreveteTeTe in r/StableDiffusion

Clever Editing In All the Right Ways

Some of the IPAdapter images

Here we can see a collection of stills from the video, which forms a kind of 'storyboard'.

“Input video is 30fps iPhone footage shot before school one morning and the result is rendered at 10fps. For some shots, I removed the background behind the people and put in rough imagery for what I’d like it to look like.”

The author posted this image as well, showing the background removal process. “This image shows how the background removal process would work to create an IPAdapter image with a specific composition.”

They later added that the background removal was done with After Effects. I can only assume the other editing and transitions were also done using similar 'human' means.

All in all, the combination transports you quite well from an apparent suburb in need of raking to a desolate (Martian?) landscape. The blend of AI video and human video editing truly shows how these tools are not a 'replacement' for humans (as too many who don't understand them seem to fear) but an amazing set of advancements we can use in media creation.

The main problem I feel many have with AI video, images, or even LLMs is that people fail to see them as tools, viewing them instead as replacements. We are in control of how these tools are used, and therefore have nothing to fear but the users. A knife is not a tool for murder unless it is in the hands of a murderer.

Background removal process for the AI video

The Full Video

Truly awesome stuff. Use the links below to see more from this innovator. They have more interesting work on their website and YouTube channel.
