Most ideas for ad campaigns never see the light of day. If an ad campaign actually ran, it’s because multiple people on several teams thought it would work. Therefore most creative analysis is deductive: how did the thing I tested perform? That makes it a case of garbage in, garbage out: you only learn about the attributes you thought to track in the first place. Most savvy marketers make some attempt to put the attributes they really care about into their URL tracking parameters so they can filter and group by them later. Understanding whether ‘luxury’ outperforms ‘nightlife’ for your travel startup can be done with a find function and a pivot table, so long as you did the work up front to label the tracking parameters of these ads when you built the campaign. Even diligent marketers struggle with this, because it’s hard to envision ahead of time what your priorities for analysis will be a few weeks or months out. Once the data is recorded it’s stored that way forever, so early labeling mistakes compound and eventually lead to analysis paralysis.
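To make that structured, filter-and-pivot step concrete, here’s a minimal sketch in Python. The column names and example rows are hypothetical stand-ins for whatever your ad export actually contains; the only assumption is that a theme like ‘luxury’ or ‘nightlife’ was stored in the utm_content parameter when the campaign was built.

```python
# Minimal sketch: group ad performance by a theme stored in utm_content.
# Column names and rows are hypothetical stand-ins for a real ad export.
from urllib.parse import parse_qs, urlparse

import pandas as pd

ads = pd.DataFrame({
    "url": [
        "https://example.com/?utm_campaign=travel&utm_content=luxury",
        "https://example.com/?utm_campaign=travel&utm_content=nightlife",
        "https://example.com/?utm_campaign=travel&utm_content=luxury",
    ],
    "spend": [120.0, 95.0, 80.0],
    "conversions": [12, 5, 9],
})

def utm_content(url: str) -> str:
    """Pull the utm_content value (e.g. 'luxury') out of a landing-page URL."""
    return parse_qs(urlparse(url).query).get("utm_content", ["untagged"])[0]

ads["theme"] = ads["url"].apply(utm_content)

# Pivot: total spend and conversions per theme, plus cost per conversion.
summary = ads.groupby("theme")[["spend", "conversions"]].sum()
summary["cost_per_conversion"] = summary["spend"] / summary["conversions"]
print(summary.sort_values("cost_per_conversion"))
```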
However, the most valuable insights often come from unstructured data — attributes you weren’t testing for up front — because these are frequently what give you great new creative hypotheses to test next. Discovering that ads for pizza made twice as much profit as ads for Chinese food. Noticing that ads for travel booking software showing the office perform twice as well as those showing an airport. Realizing that all your best-performing ads for a fintech app show the bright pink bank card. The patterns that emerge that weren’t on your radar are often the most important ones to see. To get these insights you need a system for tagging unstructured data, like the text and images in your ads, so you can see what patterns emerge. Inductive coding is a process for tagging unique ad creative with consistent labels that can be compared to find these patterns.
Inductive coding works as follows. Start by selecting a small sample (10%) of your campaigns and create codes (labels) that cover the sample. Don’t obsess over the details at this early stage: this is a progressive process, so you can always go back over your sample later if you find a pattern you want to drill into more deeply. Too many categories make comparison next to impossible, so start as broad as possible (this ad for a hotel shows the ‘beach’, this other one shows the ‘lobby’, this next one shows the ‘bedroom’) and get into more detail after your first pass.
Now select a new sample of the same size and apply all the codes you created. Go through each code and review each new ad one by one (take all 20 ads in the sample and see which ones show the ‘beach’). Feel free to add new codes if you find interesting new broad patterns (this hotel talks about ‘luxury’). When you do, review the samples you’ve already coded to find more examples of that label. You should also consolidate where a label is sparse, i.e. has too few examples (only a handful of hotels show the ‘lobby’ in their ad, so I’ll group that under ‘other’). Keep repeating until you’ve coded all your campaigns. You’ll know you’re finished when you make a pass over a sample and find no new interesting patterns. To summarize (a rough code sketch of the loop follows the list):
- Select a small sample at random (10%)
- Create broad labels and apply them
- Select the next sample and apply existing labels
- If you notice new patterns, add a new label
- Review all existing samples to apply the new label
- Consolidate if you get a sparse label
- Repeat until no new patterns emerge
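Here is that loop as a minimal Python sketch. The ad records, field names, and the hard-coded ‘beach’ label are all hypothetical; in practice the codes come from a human reviewer, and apply_codes is just a placeholder for that judgement.

```python
# Minimal sketch of one inductive-coding pass: sample, code, consolidate.
import random
from collections import Counter

all_ads = [{"id": f"ad_{i}"} for i in range(200)]  # placeholder ad records

def sample(ads, fraction=0.1):
    """Pick a random ~10% sample to code in this pass."""
    return random.sample(ads, max(1, int(len(ads) * fraction)))

# coded maps ad id -> set of codes, e.g. {"ad_17": {"beach", "luxury"}}
coded: dict[str, set[str]] = {}

def apply_codes(ad_id: str, codes: set[str]) -> None:
    """Record the codes a human assigned while reviewing one ad."""
    coded[ad_id] = codes

def consolidate_sparse(min_count: int = 3, fallback: str = "other") -> None:
    """Fold labels with too few examples into a catch-all code."""
    counts = Counter(c for codes in coded.values() for c in codes)
    sparse = {c for c, n in counts.items() if n < min_count}
    for ad_id, codes in coded.items():
        coded[ad_id] = {fallback if c in sparse else c for c in codes}

# One pass: code a fresh sample, then consolidate anything too rare to compare.
for ad in sample(all_ads):
    apply_codes(ad["id"], {"beach"})  # stand-in for the human reviewer's codes
consolidate_sparse()
```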
This can be a lengthy and manual process, but it’s something uniquely suited to humans. Our brains are meme recognition machines, so we’re far better at transcribing the meaning of an image or text and finding the patterns than is possible (so far) with machine learning. That said, machine learning tagging algorithms are great at suggesting labels you wouldn’t have thought of, so it can be worth using them as suggested tags, or having some system for pruning the majority of meaningless tags you’ll get from an automated approach. It doesn’t need to be your team doing this: it’s possible to outsource this type of labor to freelance websites or a platform like Mechanical Turk, where workers are already used to doing this kind of job at scale for machine learning data.
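If you do lean on an automated tagger for suggestions, one simple pruning system is to drop tags that are either near-universal or one-offs, since neither helps you compare ads. A minimal sketch, with hypothetical tagger output and thresholds you’d want to tune:

```python
# Minimal sketch: prune auto-generated tags by frequency.
# auto_tags is hypothetical output from some automated image/text tagger.
from collections import Counter

auto_tags = {
    "ad_1": ["person", "text", "beach", "sky"],
    "ad_2": ["person", "text", "lobby"],
    "ad_3": ["person", "text", "beach"],
}

n_ads = len(auto_tags)
counts = Counter(tag for tags in auto_tags.values() for tag in tags)

# Keep tags that are neither near-universal (uninformative) nor one-off noise.
kept = {tag for tag, n in counts.items() if 0.05 * n_ads < n < 0.9 * n_ads}
pruned = {ad: [t for t in tags if t in kept] for ad, tags in auto_tags.items()}
print(pruned)
```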
Completing this analysis gives you an unfair advantage: the best creative performs 10x better, and this increases your odds of finding it. By going to the trouble of systematically coding your creative you’ll notice patterns that less diligent people would miss. If your competitors are just going off their gut, they’ll only draw the obvious conclusions, and that’s easy to beat. It can be enough to simply count the labels and see which ones appear most often, and which are surprising or interesting. To supercharge this type of analysis, though, bring in performance data to spot underrated memes: labels that didn’t appear often, but performed well above average when they did. If you don’t have performance data because you’re coding publicly available data, say from competitors, then find some proxy for performance to stand in its stead. You may not have conversion data in the Facebook Ad Library, but you do know how long an ad has been running, which is at least an indication of its continued success. Results from this analysis should be treated as correlations, not causation. Key insights should be taken as directional, and the system works best if you validate your findings through A/B testing.
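As a sketch of the underrated-memes calculation: once your codes and a performance column (or proxy) sit in one table, you can look for labels that are rare but beat the overall average. The column names, thresholds, and numbers below are hypothetical.

```python
# Minimal sketch: flag rare labels whose performance beats the overall average.
# "ctr" is a hypothetical performance column; swap in your own metric or proxy
# (e.g. days an ad has been running in the Facebook Ad Library).
import pandas as pd

coded_ads = pd.DataFrame({
    "ad_id": ["a1", "a2", "a3", "a4", "a5", "a6"],
    "label": ["beach", "beach", "lobby", "beach", "pink card", "pink card"],
    "ctr":   [0.010, 0.012, 0.008, 0.011, 0.025, 0.022],
})

overall = coded_ads["ctr"].mean()
by_label = coded_ads.groupby("label")["ctr"].agg(["count", "mean"])

# Rare labels (at or below the median frequency) beating the average by 25%+.
underrated = by_label[(by_label["count"] <= by_label["count"].median())
                      & (by_label["mean"] > 1.25 * overall)]
print(underrated)
```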
| Name | Link | Type |
| --- | --- | --- |
| Coding Qualitative Data: A Beginner’s How-To + Examples | | Blog |
| Coding Qualitative Data: How to Code Qualitative Research | | Blog |
| Customer feedback analysis: How to analyze and act on feedback | | Blog |
| Customer feedback strategy: How to collect, analyze and take action | | Blog |
| How Superhuman Built an Engine to Find Product Market Fit | | Blog |
| Quick Guide: How To Measure The Accuracy Of Feedback Analysis | | Blog |