26.09.22 5 min read
Will AI Change Motion Design?
Animation or automation?
First, the robots came for your boots, your motorcycle and your clothes; now they want your creative jobs too. But for us in the motion business, should we be worried or excited? Will AI change motion design for better or worse?
There’s a major buzz surrounding OpenAI’s DALL·E, a neural network that can generate artwork from a simple line of text.
The AI can create just about any image, in any style, combining widely differing subject matter with ease. For example, we asked DALL·E to create ‘a crayon drawing of Elton John riding a surfboard’ and here’s what the machine pumped out. Fridge-worthy artwork, in an instant.
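For the curious, a prompt like that can be sent programmatically. Below is a minimal sketch using the OpenAI Python client; the model name, parameters and helper names are illustrative assumptions, not the exact workflow used here.

```python
# Hedged sketch: sending a text prompt to a DALL·E-style image endpoint.
# Model name and parameters are illustrative assumptions.
def build_request(prompt: str, size: str = "1024x1024") -> dict:
    """Assemble the parameters for a text-to-image request."""
    return {"model": "dall-e-2", "prompt": prompt, "n": 1, "size": size}

def generate_image(prompt: str) -> str:
    """Send the prompt to the OpenAI Images API and return the image URL."""
    from openai import OpenAI  # requires the openai package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.images.generate(**build_request(prompt))
    return response.data[0].url
```

Calling `generate_image("a crayon drawing of Elton John riding a surfboard")` would return a URL to a freshly generated image.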
Whether we like it or not, automation is slowly creeping into the creative industry. New, super-smart AI-based tools like DALL·E are popping up every day. So what could this mean for the motion industry? What other tools are on the market, and how could this kind of AI technology be employed in our workflow?
1. Generating Reference Imagery
Creative Director Ryan Summers describes how he had ‘the sneaking suspicion that not only would (DALL·E) make for an interesting concept artist, but also an idea generator for storyboarding and plot development.’
He used DALL·E to develop a 16-frame, AI-driven set of storyboards titled ‘Looking for Luke in All the Wrong Places’.
From just a few lines of text, we now have a series of frames bursting with references for colour, ideas for composition and layout. We could be looking at stills from a Star Wars film directed by the Safdie Brothers… which we would love to see.
2. Applying A New Aesthetic to Footage
Turning live footage into a sequence that mimics animation isn’t anything new.
A Scanner Darkly achieved this years ago. So too did Loving Vincent. But Scanner took a large team of animators around 18 months to complete. And Vincent relied on a team of more than 100 artists, painting individual frames of footage onto glass.
With the recent advances in machine learning, we can now run footage through a computer along with a text prompt or reference image, directing the AI to transform the footage into any style or aesthetic we’d like.
AI software Stable Diffusion boasts image-to-image rendering, which lets a user upload an image and manipulate it with a text prompt. Upload individual frames from a film and ask the software to render them in the style of Wallace and Gromit, and you’d get something similar to the example below (by the aptly named Weird Stable Diffusion Creations). It’s worth noting that Stable Diffusion is one of the only major AI tools on the market that is open source and completely free. Go Stable!
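That frame-by-frame restyling workflow can be sketched with Hugging Face’s diffusers library. The model ID, strength value and file layout below are assumptions for illustration, not a production recipe.

```python
# Illustrative sketch: restyling video frames one at a time with
# Stable Diffusion's image-to-image mode via the diffusers library.
from pathlib import Path

def output_name(frame: Path, out_dir: Path) -> Path:
    """Map an input frame path to its stylised output path."""
    return out_dir / f"{frame.stem}_styled.png"

def restyle_frames(frame_dir: str, out_dir: str, prompt: str) -> None:
    """Run every frame through img2img with the same prompt."""
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(frame_dir).glob("*.png")):
        image = Image.open(frame).convert("RGB")
        # strength controls how far the AI departs from the source frame
        result = pipe(prompt=prompt, image=image, strength=0.5).images[0]
        result.save(output_name(frame, out))
```

In practice, keeping the strength low helps consecutive frames stay visually consistent rather than flickering between unrelated styles.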
A notable mention in the field of applying a new aesthetic to footage goes to EbSynth. The breakdown below shows how video footage (left) is combined with a stylised keyframe (right) to produce the final result.
3. AI Script Writers
Every animation needs a script. But how much longer will these be typed out entirely by humans?
To help illustrate what’s possible in the world of copywriting and AI, I asked Ryte, an AI copywriter, to say a few words about the subject. Here’s what it had to say:
“We should not think of these Ai writers as a replacement for human copywriters. They just provide assistance to the content writers by getting rid of writer’s block and generating content ideas at scale.” – Thanks Ryte.
Jasper is an AI content-writing tool built on GPT-3, a language model that uses deep learning to produce human-like text. Jasper not only writes scripts but can brainstorm video topic ideas, propose click-worthy YouTube titles and write unique video descriptions. Amazingly, all aimed at driving engagement and pleasing the algorithm gods.
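Under the hood, tools like this send a prompt to a GPT-style model. A minimal sketch of that idea, using the OpenAI API directly (the model name, prompt wording and helper names are assumptions, not how Jasper actually works):

```python
# Hedged sketch: drafting a script outline with a GPT-style text model.
# Prompt wording and model name are illustrative assumptions.
def script_prompt(topic: str) -> str:
    """Build a simple scriptwriting prompt for a given video topic."""
    return f"Write a 30-second animation script about {topic}."

def draft_script(topic: str) -> str:
    """Ask the model for a first-draft script and return its reply."""
    from openai import OpenAI  # requires the openai package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": script_prompt(topic)}],
    )
    return response.choices[0].message.content
```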
4. Generating Variants
Quantum Mirror is an app created for Sony cameras that allows photographers to submit photos directly from their camera to DALL·E. The AI generates fictional variations of the photograph, which creator Nicholas Sherlock describes as revealing an alternate world that might have been. The software provides fictional new angles and new lighting, and generates new subjects and forms within the image.
Using machine learning to generate variants will no doubt work its way into the animation industry: variants of character walk cycles for crowd scenes, alternate trees to fill a wide-angle shot, variations on a wipe transition, and so on. The possibilities are endless.
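DALL·E exposes this idea directly through its image-variation endpoint. A hedged sketch, assuming the OpenAI Python client (parameter values and helper names are illustrative):

```python
# Illustrative sketch: requesting fictional variants of an existing
# image via the OpenAI images variation endpoint.
def variant_request(n: int, size: str = "1024x1024") -> dict:
    """Parameters for a batch of n variants of one source image."""
    return {"n": n, "size": size}

def generate_variants(image_path: str, n: int = 4) -> list:
    """Upload an image and return URLs for n AI-generated variations."""
    from openai import OpenAI  # requires the openai package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(image_path, "rb") as src:
        response = client.images.create_variation(image=src, **variant_request(n))
    return [item.url for item in response.data]
```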
5. Converting 2D images into 3D assets
Kaedim is a tool that uses AI algorithms to convert 2D images into clean, ready-to-use 3D assets in seconds. It’s designed to replace the long, laborious process of 3D modelling with an alternative approach, perfect for rapid prototyping and fast iteration.
The tests above, from 3D artist Emmanuel, show the fidelity of the 3D models the tool can create from a simple 2D image.
What’s next for AI in motion design?
These tools are improving by the day, and they are certainly here to stay. Whether they have the power to change the motion industry remains to be seen, but they undoubtedly provide an exciting range of new techniques to push ideas forward and open up new avenues for creativity. It may be the machines receiving the limelight at the moment, but it’ll be the innovative ways in which people use them that let these advances in technology really shine.