“Software is eating the world,” as Marc Andreessen famously put it. Software tends to automate and replace manual processes, and AI will push this even further because it will be able to teach itself.
As a growth marketer, I believe it is important to learn how to use AI to 100x our output and stay ahead of the curve. Anything less puts your career at risk.
AI is already creating a divide in the artist community, and the more pragmatic artists, the ones who adopt the technology, will succeed.
The same goes for growth marketers: embrace the technology or be tossed to the side.
In this post I’m going to cover the current landscape of generative AI and practical use cases for Paid Social. Generative AI is advancing so quickly that this post will need updating again soon.
Paid Social is video-heavy, and while the current video outputs are still early at the time of writing, they are promising. The datasets the diffusion models are trained on will get larger and more tailored.
Brands willing to take big bets can take advantage of this technology now while the outputs are novel and eye-catching.
Text
You might have rolled your eyes at yet another ChatGPT post, so I won’t devote much time to this section.
You can clearly see the proliferation of ChatGPT in your social media feeds: content marketing, ad copy, landing page copy, etc. You can even ask ChatGPT how to use ChatGPT, so I don’t have much value to add here. I am already experiencing ChatGPT burnout. All I have to say is: if you’re not using it, try it.
Tools like Canva, Notion, and Jasper.ai have text2text prompts as well.
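If you want to go beyond one-off prompting, a few lines of code can batch-generate prompt variations to paste into any of these text2text tools. A minimal sketch in pure Python; the audiences and selling angles below are made-up placeholders, not real campaign data:

```python
from itertools import product

# Hypothetical inputs for an ad-copy brief
audiences = ["busy parents", "college students"]
angles = ["saves time", "saves money"]

template = (
    "Write a 30-word Facebook ad aimed at {audience}, "
    "emphasizing that our product {angle}."
)

# One prompt per (audience, angle) combination — 2 x 2 = 4 prompts here
prompts = [
    template.format(audience=a, angle=g)
    for a, g in product(audiences, angles)
]

for p in prompts:
    print(p)
```

Each prompt becomes one ad-copy request, so your variation count scales multiplicatively with every list you add.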
Images
Diffusion models
DALL·E 2
Created by OpenAI, the company behind ChatGPT.
Midjourney
Created by the independent research lab of the same name; accessed through their Discord bot.
Imagen
Created by Google Research; not publicly available at the time of writing.
Stable Diffusion
Created by Stability AI. This diffusion model is open source, which has led to broad adoption in the AI community.
Video
A video is a sequence of images, and AI video tools are proliferating. I’ll walk through each briefly, and I might go deeper into each one in separate posts if there is enough interest.
Deforum
https://replicate.com/deforum/deforum_stable_diffusion
Disco Diffusion
https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb
img2img
https://huggingface.co/spaces/fffiloni/stable-diffusion-img2img
Neural Radiance Fields (NeRF)
https://blogs.nvidia.com/blog/2022/03/25/instant-nerf-research-3d-ai/
As I was writing this post, I saw this collaboration between McDonald’s and Karen X. Cheng come out. Awesome!
I used the Luma AI app to create this. It took me a few minutes to scan the cup on my phone. The rendering happens on their GPUs and then you are able to create new renders of the scene. You can keyframe and zoom in and out without having to reshoot.
I used Runway ML to delete the background with AI. There is some artifacting, but it’s good enough for the time I spent on it. It’s easy to imagine the practical applications for e-commerce, event spaces, museums, etc.
Make-A-Video
This one’s created by Meta (formerly Facebook). They had a waitlist where you could sign up to test the technology, but they aren’t accepting any more people.
“It uses images with descriptions to learn what the world looks like and how it is often described. It also uses unlabeled videos to learn how the world moves.”
With Make-A-Video, you can also:
- upload a video and have it generate variations
- upload a static image and turn it into a video
- upload a pair of static images and it’ll create a video between them
Imagen Video
https://imagen.research.google/video/
This one’s created by Google as part of their Imagen initiative.
Phenaki
This one’s created by Google as well. Phenaki is able to create two-minute-long videos from a long sequence of prompts.
It seems like Google is ahead of Meta at the moment with text2video technology; we will see how this unfolds. It’s telling that the two largest advertising platforms are investing heavily in this field. We can’t play with Imagen Video, Make-A-Video, or Phenaki at the moment, because the authors want to make sure the AI doesn’t generate any harmful or controversial material.
So what now?
Early adopters like GoFundMe and McDonald’s are already taking advantage of this new technology in their advertising. Celebrity deepfakes in ads are controversial and bring up questions about legality and ethics.
It is upper-funnel now, but AI images and videos will trickle down into bottom-funnel advertising. No doubt many advertisers are already dipping their toes. Concepting and iterations are faster with AI.
Imagine using text-to-image (T2I) or text-to-video (T2V) models to generate ad variations to test on Meta. Use ChatGPT to generate several messages to go along with them. Once the winning concepts bubble to the top, with good ROAS or low CPA, create iterations with Make-A-Video. Your testing capacity is no longer limited by designer resources but by budget (ad spend + GPU time). The brands that can test more will outlast the competition with lower-CPA or higher-ROAS ads in-market.
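That testing loop reduces to a simple ranking step: compute CPA and ROAS per variant, sort, and iterate on the winner. A minimal sketch, assuming spend/revenue per variant comes from your ad platform’s reporting; all variant names and numbers below are made up:

```python
def cpa(spend, conversions):
    """Cost per acquisition; None if there are no conversions yet."""
    return spend / conversions if conversions else None

def roas(spend, revenue):
    """Return on ad spend, as a ratio of revenue to spend."""
    return revenue / spend if spend else 0.0

# Hypothetical test results for AI-generated ad variants
variants = [
    {"name": "t2v_concept_a", "spend": 500.0, "revenue": 1200.0, "conversions": 24},
    {"name": "t2v_concept_b", "spend": 500.0, "revenue": 650.0,  "conversions": 10},
    {"name": "t2i_static_c",  "spend": 500.0, "revenue": 1500.0, "conversions": 30},
]

# Winning concepts bubble to the top: sort by ROAS, highest first
ranked = sorted(
    variants,
    key=lambda v: roas(v["spend"], v["revenue"]),
    reverse=True,
)
winner = ranked[0]["name"]  # the concept to iterate on next
```

The same sort key could be swapped for CPA (ascending) depending on which metric your campaign optimizes for.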
Human creativity is still at the heart of this technology; the diffusion models need training data. Data analysis, generation, and iteration can all be done by the machine, while prompting and net-new ideas remain with humans.
Are you testing AI creative?