Technical Guide to Facebook Creative
Creative is the final, most powerful lever advertisers have to differentiate their marketing.
Welcome to the conversation. This is a newly launched discussion on the business and product strategy driving parabolic growth. Please take a moment to provide feedback, share your thoughts, or pass it along to your friends.
There is no doubt that creative is the ‘Response’ in Direct Response, and when everything is said and done, Creative is the final, most powerful lever advertisers have to differentiate their marketing.
The Bottom Line on Creative (Pt. I)
I’m throwing this out early since I know not all of you have the time to make it to the end. Even the most highly produced, beautifully shot, painstakingly edited, and visually stunning creative will fall short of its potential if not implemented properly on Facebook’s platform. Pedigreed creative teams have hit dead ends in their creative testing, and endless dollars have been poured into producing creative that falls flat in market. A world class creative program starts with the science, and ends with the art.
Our discussion today will seek to cover the technicals marketers will need to consider as they produce, deploy, and test creative on Facebook. I want to emphasize these common denominators that all advertisers will face with the hope that it creates the foundation for creatives to find success earlier and align their strategy with the realities that their marketing partners are working with. In this discussion we’ll aim to cover:
Creative production: when should we produce new creative, and how much do we need?
What should we keep in mind when planning and producing new creative?
How should we approach testing, and are clean tests better than simply rotating in new ads?
How do we know when creative is working, what metrics should we monitor, and what do we do with winners?
Creative Fatigue and Creative Expansion
In general, there are only two instances where an advertiser absolutely needs to push new creative into the market: when performance is sliding, or when changes to the business (new products, rebranding, new markets, etc.) require new assets. Let’s explore how each of these two forcing functions works.
Creative Fatigue
As marketers, we know intuitively that all creative has a limited shelf life. There is a reason why advertisers tend to rotate creatives seasonally even if the product itself hasn’t changed. While there are underlying dynamics we should consider such as seasonal relevancy (Super Bowl Bud Light ads versus holiday Bud Light ads), we’re going to approach this as quantitatively as possible.
For a given audience and for a set of ads, we can reasonably assume that users will have a baseline response rate for each ad. Certainly repeat impressions from the same ecosystem of ads can drive recall and boost response rates in the near term, but there’s no argument that for the same user, the response rate will decay over time. This is the traditional interpretation of creative fatigue.
On top of same user response rate entropy, we can also expect that at the audience level, there will be fewer and fewer users remaining in the audience over time to take the action post-ad impression. This leaves us with a smaller eligible audience with a lower baseline response rate (since all the higher response rate users have already converted). This compounding audience saturation dynamic further accelerates overall creative fatigue.
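The two dynamics above can be sketched in a toy simulation. Every number here (audience size, baseline rate, decay factor, daily reach) is an illustrative assumption, not a Facebook-measured value; the point is only to show how per-user decay and audience saturation compound.

```python
# Toy model of creative fatigue. All rates and decay factors are
# illustrative assumptions, not values measured on Facebook.

def simulate_fatigue(audience=1_000_000, base_rate=0.02,
                     decay=0.85, days=5, daily_reach=200_000):
    """Each day we reach `daily_reach` users; responders leave the
    eligible pool (saturation), and the response rate of the users
    who remain decays by `decay` with each repeat exposure."""
    eligible = audience
    rate = base_rate
    results = []
    for day in range(1, days + 1):
        reached = min(daily_reach, eligible)
        responders = reached * rate
        eligible -= responders          # audience saturation
        rate *= decay                   # per-user response decay
        results.append((day, int(eligible), rate, int(responders)))
    return results

for day, eligible, rate, responders in simulate_fatigue():
    print(f"day {day}: eligible={eligible:,} "
          f"rate={rate:.2%} responders={responders:,}")
```

Even with generous made-up numbers, responders fall every day: the shrinking pool and the decaying rate multiply against each other, which is the "compounding" in compounding fatigue.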
It’s important to make a distinction here that “new creative” is not simply a new ad. While the auction and ads reporting surfaces generally function at the ad / ad set / campaign level, a user’s response is at the creative level. In order for a user to have a meaningfully different response to an ad impression, it needs to feel like a meaningfully different ad.
Creative Expansion (of audiences)
A common phenomenon for advertisers is a spike in delivery and commonly a bump in performance whenever creatives are refreshed. This initial uptick in spend is driven by the Learning Phase - the ‘random walk’ of spend required for the delivery system to calibrate itself when inputs or ads are changed. Following this learning phase, a lasting performance increase is a direct result of an increase in same user response rate relative to previous creative and/or an increase in delivery of new creative to a new subset of the audience.
The first is what we commonly think of when we “refresh” creative - new ads are visually different, perhaps more seasonally relevant, or they contain new messaging and value for the viewer.
The second is a function of calibration, and can be a consequence of an advertiser’s investment relative to the number of total ads and the size of the target audience. We’ll focus on this dynamic.
Let’s take an advertiser who has $25,000 a day to spend, and for simplicity’s sake we’ll assume this advertiser is running 1 ad and has a very generous average CPM (cost per 1000 impressions) of $5.
$25,000 * (1,000 / $5) = $5M impressions
This advertiser believes in liquidity, and is targeting a broad US 18-65+ M&F audience. Out of approx. 330M total people in the United States, this advertiser on a daily basis will reach a maximum of 1.5% of total eligible users.
Since the delivery system calibrates itself to bias ad delivery toward users who are likely to convert or have a high action rate, Conversion Optimized advertisers will typically have a frequency greater than 1, resulting in even lower audience penetration. Since we have a finite daily budget, and therefore a (much lower) finite amount of Learning Phase spend, the system has to make tradeoffs between exploration and doubling down on delivery against the subset of the audience that has expressed the best response rate. If the advertiser budget results in an absolute impression number that is small relative to the total eligible audience, we can frequently see repeat delivery against the same pockets of the audience from the same ad.
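The arithmetic above generalizes into a quick helper. The budget, CPM, and audience size are the article's example figures; the frequency of 2 in the second call is an assumed value to show how repeat impressions eat into penetration.

```python
def daily_audience_penetration(daily_budget, cpm, audience_size, frequency=1.0):
    """Daily impressions implied by budget and CPM, converted to the
    share of the audience actually reached once `frequency` repeat
    impressions per user are accounted for."""
    impressions = daily_budget / cpm * 1000
    unique_users = impressions / frequency
    return impressions, unique_users / audience_size

# The article's example: $25,000/day, $5 CPM, ~330M eligible users.
impressions, penetration = daily_audience_penetration(25_000, 5, 330_000_000)
print(f"{impressions:,.0f} impressions -> {penetration:.1%} of the audience")

# With an assumed frequency of 2, unique reach (and penetration) halves.
_, penetration_f2 = daily_audience_penetration(25_000, 5, 330_000_000, frequency=2)
print(f"at frequency 2 -> {penetration_f2:.2%}")
```

At frequency 1 this reproduces the ~1.5% ceiling from the example; any frequency above 1 pushes real penetration below that, which is why the same pockets of the audience keep seeing the same ad.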
Externally available information about the Learning Phase confirms that any ad change forces Learning Phase again, opening up the door for exploratory spend, and giving the advertiser the opportunity to use the new ad to reach a meaningfully different subset of that audience. Combined with a meaningfully different (and hopefully more relevant, better-produced) ad, we can see the reverse of the audience saturation + creative fatigue effect on conversion rates, resulting in a higher baseline response rate in a highly incremental, new set of converters.
Considerations for New Creative
There are plenty of resources covering how to create the most effective Facebook creative, and we’re not going to discuss that here, but it’s critical for us to have a couple functional considerations in mind when producing new ads.
How Much is Too Much?
When I was at Olapic, then a creative subsidiary of font licensing giant Monotype, I spent most of my time bringing a tech enabled creative platform to market. Our thesis was that the largest advertisers in the world, like Nike, Sephora, and Walmart have a huge creative need stemming from the enormous variety of products and the velocity with which those products were rotated in and out of their media. Our vision was to automate creative iteration and production by integrating it with commerce by way of an advertiser’s product feed, since many of the elements of an ad like Product Name, Product Price, Product Image, etc. are all columns contained within the feed. Algorithms could generate color palettes, typeface, and other visual elements from the product image or product data dynamically, allowing us to quickly scale the production of high quality static and video creative with very little time and labor invested.
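The feed-driven assembly described above can be sketched in a few lines. To be clear, the column names, template, and data here are hypothetical illustrations, not Olapic's actual pipeline; the sketch only shows the core idea of rendering one ad per product row from feed columns.

```python
# Illustrative sketch of feed-driven creative assembly. The feed
# columns, template, and URLs are hypothetical, not the real system.

def ads_from_feed(feed_rows, template):
    """Render one ad per product row by filling a copy template
    with feed columns (name, price) and attaching the feed image."""
    return [
        {
            "headline": template.format(**row),
            "image": row["product_image"],
        }
        for row in feed_rows
    ]

feed = [
    {"product_name": "Trail Runner 2", "product_price": "$89",
     "product_image": "https://example.com/img/trail-runner-2.jpg"},
    {"product_name": "City Parka", "product_price": "$149",
     "product_image": "https://example.com/img/city-parka.jpg"},
]

ads = ads_from_feed(feed, "{product_name}: now {product_price}")
print(ads[0]["headline"])  # Trail Runner 2: now $89
```

Because every element comes from the feed, the library scales with the catalog for free; the catch, as the next paragraph explains, is that templated variants are not *meaningfully different* ads from the user's point of view.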
This generally flopped from an ad perspective. While this production methodology allowed advertisers to create a huge library of assets spanning their entire product set, something that was previously impossible, what we failed to consider was:
a) How the user perceives ad creative
b) How the ad delivery system handles ads
On a), it’s important to remember that creative must be visually differentiated in order to drive meaningfully different responses from the users who see it.
Perhaps more importantly, b) carries consequences from the Learning Phase and how spend is distributed across ads.
In general, we know that Learning Phase requires a set number of conversions in order for the system to properly learn and calibrate delivery. This means that volatility in ad performance will stabilize over time and with an increasing number of conversions as the ad exits learning.
Driving a set number of conversions comes with a price tag, and in general the more ads an advertiser has per ad set, the fewer impressions each ad will receive. This results in fewer total conversions attributed to each ad, which either requires an increase in total investment so each ad can exit learning, or a situation where fewer ads exit learning and get a fair chance at delivery.
The rule of thumb recommendation is 4-5 ads per ad set, but it’s important to take a careful look at total budget available and average cost per conversion when deciding how many creatives to put in market at a time.
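This budget-vs-ad-count tradeoff is easy to make concrete. The ~50 conversions per week exit threshold is the commonly cited Learning Phase figure, and the $1,000/day budget and $50 CPA are assumed numbers; the even spend split is also a simplification, since Ads Ranking skews spend in practice.

```python
# Back-of-envelope check of how many ads a budget can support.
# The 50 conversions/week threshold is the commonly cited Learning
# Phase figure; the budget, CPA, and even split are assumptions.

def conversions_per_ad(weekly_budget, cpa, num_ads):
    """Weekly conversions each ad can drive at a given CPA,
    assuming spend splits evenly across the ads in the ad set."""
    return (weekly_budget / cpa) / num_ads

weekly_budget = 7 * 1_000   # assumed $1,000/day
cpa = 50                    # assumed cost per conversion
for n in (1, 5, 20, 50):
    c = conversions_per_ad(weekly_budget, cpa, n)
    status = "exits learning" if c >= 50 else "stuck in learning"
    print(f"{n:>2} ads -> {c:,.0f} conversions/ad/week ({status})")
```

At this budget a single ad clears the threshold comfortably, but by 5 ads each one is starved below it, which is exactly the "fewer ads exit learning" situation described above.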
Format Eligibility
While most ad placements support most images and videos of varying size and length, there are some baseline format requirements creative teams should consider that are referenced in the Ads Guide.
The key takeaway is that advertisers who don’t have a set of creatives eligible for delivery across all placements are closing themselves off to available delivery.
Creative Testing
One of the most divisive mindset shifts for clients is that creative doesn’t always have to be tested. We can reasonably assume that the auction does a good job of ensuring that some spend is allocated towards new creative, and is reasonably good at ensuring delivery for differentiated creative. Testing also comes with multiple costs, not just the allocated media spend to power the test, but also the cost inefficiencies of forcing spend against different ads and the time it takes to manage the tests.
With that in mind, there are two primary ways of testing creative on Facebook.
Ads Ranking
Sometimes referred to as Auction Selection, Ads Ranking is the function that divides spend amongst ads within an ad set based on Total Value (AKA eCPM or auction competitiveness). Ads Ranking is the simplest way to test, and only requires an advertiser to rotate the challenger ad into an ad set with other incumbent ads. The Learning Phase function guarantees that the ad will receive a minimum amount of spend.
While Ads Ranking is easy to set up, it still requires a minimum amount of conversions (and therefore spend) to exit the Learning Phase, and since the ad set level spend will still be divided amongst the other ads, there is no guarantee that a certain amount of spend will be allocated to the test ad. Ads Ranking also provides only directional performance insights, since advertisers can only observe how an ad performs relative to another based on how much spend it receives, and how it actually performs in-market.
When interpreting results using Ads Ranking testing, it’s important to consider the breakdown effect, which explains why your ads with the lowest CPA may not see as much delivery as ads with higher CPA.
A/B Testing
A/B testing or split testing is the cleanest way to test, and has the advantage that it guarantees each ad will receive the same amount of spend, and will be shown to a mutually exclusive audience.
A/B testing comes with the cost that each cell needs to have enough budget allocated in order to get each ad to exit the learning phase, and it exposes your ads to inefficiencies since we are forcing spend through each cell, preventing the delivery system from reallocating budget to ads that may perform better over time. A/B testing may be an unrealistic way for advertisers to test creative when they produce in high volume, or have a large number of campaigns that require creative testing.
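The per-cell budget requirement is worth pricing out before committing to a split test. This sketch again assumes the commonly cited ~50-conversion Learning Phase exit threshold and an illustrative $40 CPA.

```python
# Minimum budget for a clean A/B creative test: each cell must fund
# enough conversions to exit the Learning Phase on its own. The
# 50-conversion threshold and the CPA are assumptions.

def ab_test_budget(num_cells, cpa, conversions_to_exit=50):
    """Per-cell and total budget so every cell independently
    clears the Learning Phase threshold."""
    per_cell = conversions_to_exit * cpa
    return per_cell, per_cell * num_cells

per_cell, total = ab_test_budget(num_cells=2, cpa=40)
print(f"${per_cell:,} per cell, ${total:,} total")  # $2,000 per cell, $4,000 total
```

Because the total scales linearly with the number of cells, a high-volume creative program testing many challengers at once can quickly find A/B testing uneconomical, which is the tradeoff described above.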
It is easier to interpret results from A/B testing, since the in-platform results will show statistical confidence and you can simply select the winner based on attributed performance.
The Bottom Line on Creative (Pt II)
Creative on Facebook is a bit of an art and a bit of a science. Like all things marketing and advertising there are a number of X factors that will go into whether or not a user will respond to an ad. Our goal is to make sure we are controlling for the technical factors that go into how creative is handled by the platform, and what underlying dynamics are impacting creative testing whether we deploy creative directly, lean on Ads Ranking, or set up a clean A/B test.
A solid understanding of these interactions will allow us to feel confident in judging the performance of our ad creative and help us draw reliable insights from creative testing.