In performance marketing, the first 3 seconds of an ad are incredibly important. If the user doesn’t keep watching long enough to hear your message, you’ve effectively lost the sale. One way to frame a winning creative strategy is: ‘How can I get attention and keep it?’
When testing creative on Meta, I like to monitor average video view duration or create a custom metric like ‘Hook Rate’ to measure my ads’ upper funnel performance:
Hook Rate = 3 second video views / impressions
This answers the question ‘What percentage of the market is watching past 3 seconds?’
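As a quick sketch, the metric above is easy to compute yourself from exported ad data (the function name and inputs here are mine, not a Meta API):

```python
def hook_rate(three_sec_views: int, impressions: int) -> float:
    """Share of impressions that kept watching past the 3-second mark."""
    if impressions == 0:
        return 0.0
    return three_sec_views / impressions

# 12,000 three-second views on 400,000 impressions -> 3% hook rate
print(f"{hook_rate(12_000, 400_000):.1%}")  # 3.0%
```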
There is a lot of good advice on how to test hooks to improve conversion rates. Text hooks like ‘Did you know?’, ‘This is how…’, or ‘How to…’ can sometimes work on their own to decrease CPA or increase ROAS.
Elite performance marketing teams test visual hooks along with text hooks — think Rocket Money, Calm, and Hims.
You can test your own visual hooks with stock assets, UGC, or 3D modeling software like Blender (free, my fave) or Adobe After Effects.
I want to show you a brand-new way to create visual hooks with Generative AI.
What is Deforum?
Deforum describes itself as “a community of AI image synthesis developers, enthusiasts, and artists.”
I briefly covered Deforum in my previous post titled ‘Generative AI for Paid Social’. You can support them here on Patreon - https://www.patreon.com/deforum
It’s built on top of the open-source Stable Diffusion model, and you can tinker with it yourself at the link below without needing to install anything:
https://replicate.com/deforum/deforum_stable_diffusion
Animation examples
Deforum doesn’t have ‘temporal coherence’, meaning it has no concept of time. If you prompt ‘hand crushing an egg’, it will show an egg in a hand — not the motion of the crush unfolding.
Meta and Google employ an army of AI researchers working on the temporal coherence problem with models like ‘Make-A-Video’, ‘Make-A-Video3D’, ‘Phenaki’, and ‘Imagen Video’. I covered these in my ‘Generative AI for Paid Social’ post, with the exception of MAV3D, which was just released.
Video input examples
To work around the temporal coherence issue, Deforum can take a video input and use it as the base layer. Play around with the strength, as that is the ‘creative liberty’ setting: 1 means it won’t deviate from the video, and 0 means it won’t use the video at all. I use a setting between 0.5 and 0.7. Set the seed behavior to ‘fixed’.
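As a rough sketch, the relevant knobs in a Deforum-style settings payload look something like this. Treat the parameter names as illustrative — they vary between the Replicate model and local notebooks — and the input file name is hypothetical:

```python
# Illustrative Deforum-style video-input settings; exact parameter
# names differ between the Replicate model and local notebooks.
settings = {
    "animation_mode": "Video Input",    # use a video as the base layer
    "video_init_path": "hook_v1.mp4",   # hypothetical input clip
    "strength": 0.6,                    # 'creative liberty': 1 = stick to the video, 0 = ignore it
    "seed_behavior": "fixed",           # keep the seed constant across frames
}

# Sanity-check the recommended range before kicking off a render
assert 0.5 <= settings["strength"] <= 0.7
```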
Takeaways for performance marketers
Remember our mission of winning the attention game?
In 2023, with feeds saturated by UGC, 3D motion graphics, and 2D flat designs, generative AI has just paved an unexplored road with infinite possibilities.
Two obvious use cases for generative AI in performance marketing, aside from ChatGPT:
text2video to create whole new visual concepts from scratch for creative testing
text2video to apply new styles to winning visual hooks/concepts so they don’t get stale
You can follow a modular framework and begin stacking battle-tested visuals one after another in your creative testing. (e.g. 3 second AI clip 1 → crossfade → 3 second AI clip 2, etc.)
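One way to stitch those 3-second clips together is ffmpeg’s `xfade` filter. Here is a minimal sketch that builds the command for two clips (file names are hypothetical, and it assumes both clips share the same resolution and frame rate):

```python
def xfade_cmd(clip_a: str, clip_b: str, out: str,
              clip_len: float = 3.0, fade: float = 0.5) -> list[str]:
    """Build an ffmpeg command that crossfades two same-size clips.

    xfade starts the transition `offset` seconds into the first clip,
    so two 3s clips with a 0.5s fade begin overlapping at 2.5s.
    """
    offset = clip_len - fade
    return [
        "ffmpeg", "-i", clip_a, "-i", clip_b,
        "-filter_complex",
        f"xfade=transition=fade:duration={fade}:offset={offset}",
        out,
    ]

print(" ".join(xfade_cmd("ai_clip_1.mp4", "ai_clip_2.mp4", "stacked.mp4")))
```

Chaining more than two clips works the same way, just with additional `xfade` stages in the filter graph.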
When I created 3D visuals with Blender, each one took me several days to complete, plus overnight rendering.
Briefing content creators took about an hour: writing a creative brief to share with our partners eliminated most of the back and forth.
Each video took me about 15 minutes to generate. Of course, that time increases with the number of steps and frames generated, but you can see how powerful AI is.