Creative Testing Definition
There is nothing more important in marketing than understanding what concepts trigger your target audience to become and stay customers of your product. Creative testing is a collection of methods for experimenting with different image and adcopy combinations to maximize brand performance.
Results from creative testing help the creative team to revise concepts, drop ones that do not resonate with the audience, and identify which ones the audience likes best. It can be used for market research before a product launch, or to find better performing variations of an existing brand. Growth is a function of the number of experiments you run.
If you want your marketing to perform, you need to find the combination of factors that resonate with your target audience. As Ogilvy said: "Never stop testing and your advertising will never stop improving". Your adcopy, design and product should all work together; if just one component is off, your campaign won't work. Find the right combination of memes, and you win.
Scientific Advertising
In 1923, Claude Hopkins published Scientific Advertising, a book that covered techniques like split testing and coupon-based tracking. Ogilvy wrote of the book: "Nobody should be allowed to have anything to do with advertising until he has read this book seven times". RCTs (Randomized Controlled Trials), or A/B tests as marketers know them, sit at the top of the evidence pyramid: the gold standard for proving one thing caused another.
Using scientific principles and methods you can prove what works and gain an edge over the competition. If you don't test new creative, or your tests aren't well documented, your organization is at risk of forgetting what worked (or didn't). Winning tactics are accidentally abandoned and failed experiments unknowingly repeated. If early insights weren't statistically valid, some misses will be miscategorized as hits (Type I error), and some hits as misses (Type II error).
To ensure your results are valid, change one variable at a time and set out your plan in advance. The exposed group should be large enough to reach statistical significance, and users should be randomly assigned to treatment or control. Publish openly to build trust in the methodology, and let others build on your work.
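To make that concrete, here is a minimal sketch of a randomized split test evaluated with a two-proportion z-test. The audience size, conversion rates, and variable names are illustrative assumptions, not figures from any real campaign or ad platform API.

```python
"""Minimal two-proportion z-test for an ad split test.

Assumes you already have conversion counts for a control ad and one test
variant, with users randomly assigned to each. All numbers are illustrative.
"""
import math
import random


def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: both ads convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - normal_cdf(abs(z)))


# Randomly assign a simulated audience to control or variant.
random.seed(42)
assignments = [random.choice(["control", "variant"]) for _ in range(20_000)]
n_control = assignments.count("control")
n_variant = assignments.count("variant")

# Pretend the control converts at 2.0% and the variant at 2.4%.
conv_control = sum(random.random() < 0.020 for _ in range(n_control))
conv_variant = sum(random.random() < 0.024 for _ in range(n_variant))

p_value = two_proportion_z_test(conv_control, n_control, conv_variant, n_variant)
print(f"control: {conv_control}/{n_control}, variant: {conv_variant}/{n_variant}")
print(f"p-value: {p_value:.4f} -> "
      f"{'significant' if p_value < 0.05 else 'not significant'} at alpha=0.05")
```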
A/B Testing vs Machine Learning
Most companies land on their initial winning combo by intuition and luck. Attempting data-driven decision making in the early stages of company-building is a mistake. There's more noise than signal, and you'll suffer from analysis paralysis. Anything that's easily changeable or reversible "can and should be made quickly by high judgment individuals or small groups", as Bezos says. Just start trying things, and once you find something that works, double down on that area to scale.
Modern ad platforms optimize creative with AI, but that can mean one ad gets 90% of the budget, leaving us unsure if the rest were given a fair chance. There's no substitute for split testing to learn if one creative performs better than another, but experiments are expensive and slow. So when do we prioritize human learning over machine learning?
Time in your testing schedule should be reserved for the most consequential strategic decisions, the ones that can't easily be reversed, what Bezos calls "one-way doors". Which major functions, attributes or features are most important to figure out? Once you've identified the biggest levers for growth, drill down into themes within each major concept. As you get more granular, small 1-2% differences won't reach statistical significance, so you can abandon science and let the algorithm decide.
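To see why small lifts rarely clear the bar, the sketch below applies the standard two-proportion sample-size formula, assuming a 2% baseline conversion rate; the alpha, power, and lift values are placeholders you would swap for your own channel's numbers.

```python
"""Rough sample-size check: why 1-2% relative lifts rarely reach significance.

Sketch only; the 2% baseline conversion rate and lift values are assumptions.
"""
from statistics import NormalDist


def sample_size_per_variant(p_base: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed per variant to detect the given relative lift."""
    p_test = p_base * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p_base + p_test) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p_base * (1 - p_base)
                              + p_test * (1 - p_test)) ** 0.5) ** 2
    return int(numerator / (p_test - p_base) ** 2) + 1


for lift in (0.01, 0.02, 0.10, 0.30):
    n = sample_size_per_variant(p_base=0.02, relative_lift=lift)
    print(f"{lift:>4.0%} relative lift -> ~{n:,} users per variant")
```

With a 2% baseline, detecting a 1% relative lift takes millions of users per variant, while a 30% lift needs only around ten thousand; that gap is why the granular variations are best left to the algorithm.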
Creative Testing Framework
When testing, it's important to make space for scientific testing but also to cater for algorithmic optimization. You need to protect your "business as usual" (BAU) campaigns from disruption by poor-performing test variations. However, if you don't test new creative, your incumbent ads will fatigue and their performance will start to decay.
Here's the best way I've found to navigate this trade-off:
- Generate multiple creative and adcopy concepts
- Calculate how many statistically significant split tests you can run for your Test budget (see the sketch after this list)
- Fill up empty slots in Test campaigns with the combinations you believe will work
- Roll out winning combinations in BAU campaigns
- Drill down into successful combinations to generate variations on the theme
- Add variations into BAU campaigns without testing (let the algorithm decide)
- Turn off losing test variations to open up more testing slots
- Repeat the process
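Here is a back-of-envelope sketch of the budget calculation from the second step. The CPM, users-per-variant, and budget figures are hypothetical, and it treats one impression as roughly one reached user, which is a simplification.

```python
"""Back-of-envelope planner: how many split tests can a test budget fund?

Illustrative sketch only: cpm, users_per_variant, and budget are placeholder
inputs, and one impression is treated as roughly one reached user.
"""

def test_slots(monthly_budget: float, test_share: float, cpm: float,
               users_per_variant: int, variants_per_test: int = 2) -> int:
    """Number of full split tests the test budget can fund in a month."""
    test_budget = monthly_budget * test_share           # e.g. 5-30% of spend
    impressions = test_budget / cpm * 1000              # CPM = cost per 1,000 impressions
    impressions_per_test = variants_per_test * users_per_variant
    return int(impressions // impressions_per_test)


# Example: $100k/month total spend, 10% reserved for testing, $8 CPM,
# ~10,000 users per variant for significance (from the earlier sample-size sketch).
slots = test_slots(monthly_budget=100_000, test_share=0.10,
                   cpm=8.0, users_per_variant=10_000)
print(f"The test budget supports roughly {slots} two-variant split tests per month")
```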
I've found that this works across channels and industries. It lets you set your relative risk-reward by adjusting your testing budget: 5% for risk-averse brands and as much as 30% if you need a big win.
If a platform doesn't explicitly offer split testing, then you can approximate it by targeting the same audience with multiple campaigns.
BAU campaigns target the same audiences as Test campaigns, so there is overlap, but given that Test campaigns take only a small percentage of spend, BAU performance shouldn't be impacted by much.