Performance branding is a marketing strategy that tests the impact of potential creative and messaging variations before they are selected to form part of a company’s brand. With the advent of digital ads, it’s now possible to split-test brand elements live in the market and see which gets the most clicks. This technique combines performance marketing principles with a deep respect for the value of traditional brand strategy.
Brand vs Performance
Performance marketers regularly test hundreds of combinations of images, videos, and ad copy to find the creative and messaging that resonates with the audience. While this approach is effective at driving short-term results, brand marketers fear the lack of consistency will dilute the brand message and make it difficult for consumers to recall the brand, harming long-term sales.
Having been in the unique position of building a performance marketing agency (Ladder) incubated by a traditional creative agency (BBH), I was frequently in the middle of this debate. I refined my approach while working with fast-growing startups like Monzo Bank, Booking.com, and Facebook Workplace, as well as with traditional brands like Unilever, Travelex, and Nestle.
Brands that Perform
Coming from an economics background, I was attracted to the ability of digital-first startups – like King, Uber, and Wish – to test and learn which creative and messaging elements were driving performance. However, working out of the BBH offices that built so many iconic brands – like Levi’s, Audi, and Johnnie Walker – made the value of brand advertising hard to ignore. Performance branding was my way of leveraging the power of scientific testing at the tactical level, while honoring the marketing science literature on brand equity at the strategic level.
My solution, ‘performance branding’, was to use in-market testing – actually running ads on Facebook or other key channels – to prove which creative and messaging combinations worked, then incorporate the winners into the wider brand as distinctive brand assets. This testing process allowed us to be data-driven in choosing the distinctive brand assets we later reinforced through brand advertising. It took the risk out of the design and copywriting process, and gave us solid justification for the normally ‘fluffy’ decisions we needed to make in branding.
It’s common for startups, book authors, and mobile games to make changes to products based on what works in testing, as there’s no existing brand at risk. For established brands, testing can be done on a shadow brand – a ‘fake’ brand built for testing purposes – and later incorporated into the master brand once fully validated. We built several such brands to test what users would click and convert on. Once users got through to purchase, we’d display a ‘not available’ message and direct them to a survey to gather more qualitative information.
The Cost / Variant Trade-off
The success of these projects always hinges on the experiment design stage: defining what success looks like, and deciding what creative and messaging variations to prioritize in testing. The difficulty is making the trade-off between the number of variations you want to test and the cost of testing those variations to get statistically significant results. Given the cost and time it takes to run a successful test, I only recommend testing at the wider concept or theme level, rather than smaller variations like button colors or borders.
Achieving statistical significance is a function of the number of observations, the number of variations, and the effect size of the test. More variations, or smaller changes between them, necessitate more ad spend to generate more observations. If an idea isn’t existentially important to the success of your business and can easily be reversed later – what Jeff Bezos calls a ‘two-way door’ – it’s not worth testing and should be decided by trusted individuals, or left to the algorithm to optimize. Failing to make this trade-off can incur high costs and lead to insignificant test results, which in my experience diminishes the organization’s willingness to be data-driven.
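To make the trade-off concrete, here’s a rough Python sketch of the kind of arithmetic a test duration calculator performs. It uses a standard two-proportion approximation at roughly 95% confidence and 80% power; the calculator you use will make its own assumptions, so expect slightly different numbers.

```python
import math

def impressions_per_variant(base_rate, mde, z_alpha=1.96, z_power=0.84):
    """Approximate impressions needed per variant to detect a relative lift
    of `mde` over `base_rate` (~95% confidence, ~80% power)."""
    delta = base_rate * mde  # the absolute lift you want to be able to detect
    return math.ceil(2 * (z_alpha + z_power) ** 2
                     * base_rate * (1 - base_rate) / delta ** 2)

# Smaller effects and more variations both inflate the traffic (and spend)
# needed before any result can reach significance.
for mde in (0.40, 0.20, 0.10):
    for n_variants in (3, 9, 25):
        total = impressions_per_variant(0.02, mde) * n_variants
        print(f"MDE {mde:.0%}, {n_variants} variants -> ~{total:,} impressions")
```

Halving the minimum detectable effect roughly quadruples the impressions you need, which is why it rarely makes sense to test small tweaks in-market.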
Performance Branding in the Wild: Tim Ferriss, The 4-Hour Workweek
When Tim Ferriss released his first book, he didn’t leave anything to chance. Over 200,000 new books are published in the US each year, and he wanted his to stand out. As well as printing out copies of the book cover to place in bookstores and observe which got noticed, he also ran a campaign on Google AdWords to decide the title. He didn’t call it “performance branding” (it’s a term we made up years later), but it’s still a powerful example of how you can test your way to success.
By A/B testing six different title options, Ferriss ended up with the one that most resonated with his target audience: “The 4-Hour Workweek”. In hindsight this iconic title is obviously the best choice, but at the time it wasn’t the one he would have chosen on opinion alone. If it weren’t for this performance branding experiment, the book might have been called “Drug Dealing For Fun And Profit”. That title certainly would have garnered attention, but it’s unlikely the book would have been an international bestseller!
Performance Branding Example
Say you were an Airbnb host in Miami looking to advertise your listing. Your apartment is 5 minutes from Miami Beach, so the location is a big draw. You’ve also invested in stylish, modern furniture because many of your guests appreciate good design. One thing guests often mention in reviews is that the building is set back from the street, making it more private than alternative accommodations. You’ve sourced three photos you think will work: a photo of the building, an image of the beach, and an interior shot.
There’s only space for one feature and one photo in each ad, but you can run multiple ad variations to test different combinations. You aren’t sure if copy that mentions “5 mins from the beach” should be paired with a photo of the beach, or if it’s better to show the apartment interior. The outside of the building looks secluded but old-fashioned: should you mention the modern, stylish interior? You decide to be scientific and A/B test every combination of photo and copy – three photos by three messages, nine variations in total – to see what works best.
You calculate how long to run the experiment with a test duration calculator:
- 2% click-through rate
- 20% minimum detectable effect
- 9 test variations
- 5,000 daily impressions
= 35 days, or 175k impressions
With a $5 CPM (Cost per 1,000 Impressions) you’re looking at $875 to reach statistical significance.
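Plugging those assumptions into the sketch from the trade-off section above lands in the same ballpark; the exact dollar figure differs slightly because every calculator makes its own rounding and power assumptions.

```python
# Reusing impressions_per_variant() from the earlier sketch:
per_variant = impressions_per_variant(0.02, 0.20)   # ~19,000 impressions
total_impressions = per_variant * 9                  # ~173,000 impressions
days = math.ceil(total_impressions / 5_000)          # ~35 days at 5,000/day
cost = total_impressions / 1_000 * 5                 # ~$865 at a $5 CPM
print(days, "days,", f"${cost:,.0f}")
```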
So you run the test, and tally the results:
The photo of the interior was the clear winner, working well with all copy variations. Mentioning the location near the beach worked well, but not when paired with the photo of the building. Copy that talked about the stylish décor held its own, but fell flat when paired with a photo of it. You decide to go with the interior photo plus the beach-location copy. You’ve taken what would normally be a gut decision, and found combinations that worked that you might not otherwise have tried.
Of course it doesn’t end there: you can take more photos of the interior, and source more photos of the beach. There are other features guests might want to know about, like your super-fast Wi-Fi. You can also rewrite some of the copy to be more appealing: maybe your first attempt could be improved upon? Doing the math, you realize that even testing 5 photos x 5 copy routes is 25 variations, which would take over 100 days and cost over $2,500. It becomes clear how important it is to have a system for deciding what to test.
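Reusing the same sketch for the 25-variation scenario shows the jump in scale; the simplified formula lands a little under the calculator’s figures, but the order of magnitude is the point.

```python
# Reusing impressions_per_variant() from the earlier sketch:
total_impressions = impressions_per_variant(0.02, 0.20) * 25   # ~480,000 impressions
print(math.ceil(total_impressions / 5_000), "days,",            # ~97 days
      f"${total_impressions / 1_000 * 5:,.0f}")                  # ~$2,400
```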
Performance Branding Template
Whenever a team was struggling to prioritize, I would Google for a test duration calculator to run the numbers and make it clear to them how many variations they could afford. After doing this hundreds of times, I developed a template for quickly making the necessary calculations while incorporating budget limitations. Feel free to use this template in your own organization (or modify it with attribution).
Download your copy here:
> Performance Branding Template
If you’re planning to run your own performance branding experiment, the first step is to come up with the ideas you want to test. I personally use meme mapping – building a swipe file, then tagging patterns – to get an understanding of what creative and messaging is being used in the industry and communities I plan to operate in. However, you can also use the output of a traditional creative process (stop just before the team consolidates everything into one big idea).
Once you have different creative and messaging ideas, give them names and enter them on the left-hand side of the spreadsheet, in columns A and B; the template then automatically calculates the number of variations (assuming you test every creative against every message). Next, input your assumptions for Conversion Rate (events / impressions) and Cost per Conversion, based on your historical performance or benchmarks for your industry.
Estimating the expected effect size is tricky, because by definition you don’t know how the test will perform until you run it. I tend to use 20% as standard, because that was the average across the ~8,000 experiments we ran at Ladder. However, if each of your creative and messaging variations is a big conceptual change, you may assume a larger impact. The result, in cell F5, is the total you need to spend to run the experiment.
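If you’d rather see the logic than reverse-engineer the spreadsheet, here’s a rough, hypothetical reconstruction of the budget arithmetic in Python. The creative and message names, the cost per conversion, and the layout are illustrative stand-ins, not values from the actual template.

```python
import math

# Illustrative inputs (stand-ins for columns A and B and the assumption cells):
creatives = ["C1 building exterior", "C2 beach", "C3 interior"]
messages  = ["M1 five mins from beach", "M2 stylish interior", "M3 private building"]

conversion_rate     = 0.02   # events / impressions, from history or benchmarks
cost_per_conversion = 0.25   # assumed cost per event, in dollars (hypothetical)
expected_effect     = 0.20   # relative lift you expect a winner to show

variations = len(creatives) * len(messages)   # every creative x every message

# Impressions per variation, using the same ~95% confidence / ~80% power approximation:
delta = conversion_rate * expected_effect
impressions_per_variation = math.ceil(
    2 * (1.96 + 0.84) ** 2 * conversion_rate * (1 - conversion_rate) / delta ** 2)

# Spend = variations x expected conversions per variation x cost per conversion
conversions_per_variation = impressions_per_variation * conversion_rate
total_spend = variations * conversions_per_variation * cost_per_conversion

print(f"{variations} variations -> ~${total_spend:,.0f} to run the experiment")
```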
The template also generates the grid of combinations you plan to test, alongside a naming convention you can use when setting things up. Analytics is a case of garbage in, garbage out, and we avoided issues by giving each creative and message its own ID (e.g. C1 = Creative 1, M1 = Message 1) and putting those IDs into the ad name and the UTM parameters of the URL.
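As an illustration of that convention (the slugs, ad names, and UTM values below are hypothetical, not the exact format we used), a few lines of Python can generate the full grid and the tagged URLs:

```python
from itertools import product
from urllib.parse import urlencode

creatives = {"C1": "building-exterior", "C2": "beach", "C3": "interior"}
messages  = {"M1": "five-mins-from-beach", "M2": "stylish-interior", "M3": "private-building"}

landing_page = "https://example.com/listing"   # placeholder URL

for (c_id, c_slug), (m_id, m_slug) in product(creatives.items(), messages.items()):
    ad_name = f"{c_id}-{m_id}_{c_slug}_{m_slug}"
    tagged_url = landing_page + "?" + urlencode({
        "utm_source": "facebook",
        "utm_medium": "paid-social",
        "utm_campaign": "performance-branding-test",
        "utm_content": f"{c_id}-{m_id}",   # ties clicks back to the exact combination
    })
    print(ad_name, tagged_url)
```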
The purpose of this template is to do the calculations you need to run an A/B/n test, with the native split-testing functionality offered by Meta and others in mind. However, with hard-to-measure channels like TV, you may need a geo-level experiment, or to build a message mix model – a reformulation of marketing mix modeling – to estimate the impact of larger changes.
Performance Branding Case Study
As is often the case, the first client I had at Ladder was also the most instructive, and when I look back at our work on Money Dashboard, it’s easy to connect the dots to what later became ‘performance branding’. They came to us drowning in data: a consultant was using ‘proprietary software’ to personalize creative for hundreds of micro-targeted audiences based on interwoven combinations of demographics, interests, and behavioral segments.
Our first step was to consolidate this scatter-gun approach: throwing spaghetti at the wall to see what sticks isn’t a strategy. Ditching minor variations, we stepped back to the conceptual level and questioned whether stock photos of people buying things were really the right approach. With renewed focus we tested hundreds of new creatives across 10 audiences: multiple combinations of different concepts, formats, taglines, and images. Only about 30% of tests would succeed, and there were internal competitions to predict which creatives would win (which I frequently lost).
Finally we found a new strategy that worked, dubbed internally “financial dilemma illustrations”. The ultimate performer was “Be good with money”, which almost wasn’t approved because the team thought it might be too ‘vague’. It drove 109x more installs at a 65% lower cost per install than the best-performing ad on the account when we took it over.
Over six months of aggressive testing, we noticed something curious: the best ad for one audience is the best ad for all. When we found a winner in one of the audiences, e.g. ‘students’, it would also outperform the other options when rolled out to unrelated niches like ‘frequent travelers’ or ‘working professionals’. We split-tested this finding multiple times, because it flew in the face of all the ‘best practices’ that come out of the blogs of self-professed Facebook ‘gurus’.
There was no escaping it: the results were replicated enough times on Money Dashboard, and across the other 200+ clients we worked with in different industries. The ‘personalization myth’ sounds good in a presentation, and implementing a sophisticated micro-segmentation strategy might get you promoted, but it’s wrong. This is the type of counter-intuitive insight you just can’t get by following the highest-paid person’s opinion. You can only get there through a robust creative testing process like performance branding.
Conclusion
Performance branding offers a powerful solution to the age-old debate between brand and performance marketers. By using in-market testing to prove which creative and messaging combinations work best, organizations can incorporate distinctive brand assets that resonate with their target audience while also driving short-term results. However, achieving statistical significance requires careful experiment design and trade-offs between the number of variations tested and the cost of testing them. Using a test duration calculator – like the template provided – can help teams prioritize and make data-driven decisions, succeeding in both the short and long term.