How A/B Testing Can Add Value to Your Ad Campaign

Two cable cars, one red and one green, traveling in opposite directions in the sky.

Take a look at the image above. Would you rather ride in the red cable car or the green? Which one catches your attention first? Does one seem faster? Safer? More protected from the sun or wind? This “either/or” line of questioning forms the basis of a common type of marketing experiment: A/B testing.

What is A/B Testing?

A/B testing allows advertisers to experiment with slightly different versions of a marketing asset in order to determine which version performs better for a given metric. Examples of A/B testing can include:

  • testing different button sizes in an email newsletter to see which attracts the most clicks
  • testing different images in an Instagram ad to see which earns the most likes
  • testing two versions of a landing page with different calls-to-action to see which produces the most leads

The options for A/B testing are almost endless, and the results give marketing agencies and their clients valuable insight into target audience preferences and behaviors.
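At its core, an A/B test is a comparison of two rates, such as version A’s click-through rate versus version B’s, plus a judgment about whether the gap is big enough to trust. As a rough illustration only (not a tool we use in client work), the short Python sketch below compares two hypothetical ad versions using a standard two-proportion z-test; every number in it is invented.

```python
from math import sqrt, erf

def two_proportion_ztest(clicks_a, impressions_a, clicks_b, impressions_b):
    """Compare the click-through rates of two ad versions with a two-proportion z-test."""
    ctr_a = clicks_a / impressions_a
    ctr_b = clicks_b / impressions_b
    # Pooled rate under the null hypothesis that the two versions perform the same.
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (ctr_a - ctr_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return ctr_a, ctr_b, p_value

# Hypothetical counts for two versions of the same ad (illustrative only).
ctr_a, ctr_b, p = two_proportion_ztest(clicks_a=420, impressions_a=50_000,
                                       clicks_b=350, impressions_b=50_000)
print(f"Version A CTR: {ctr_a:.2%} | Version B CTR: {ctr_b:.2%} | p-value: {p:.3f}")
```

A small p-value (commonly under 0.05) suggests the difference in click-through rates reflects a real audience preference rather than random noise.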

Why A/B Test Ad Creative?

Like any well-developed experiment, A/B testing begins with a question: “Will version A or version B perform better in my campaign?” The results of A/B tests influence how marketing teams optimize their campaigns to achieve the best possible final outcome.

For example, if one ad version significantly outperforms the other for an important KPI, a media planner may advise reallocating advertising dollars towards the more successful ad version, or perhaps even removing the underperforming ad version from the campaign.

In some instances, the results of A/B testing will prompt marketing teams to refresh or replace their creative assets during the course of a campaign. Data gathered through A/B testing can also help steer creative teams as they plan assets for future campaigns. Going forward, they may include more of the elements or themes that A/B testing has shown their target audience prefers.

A/B Testing & Paid Social Algorithms

Social media channels including TikTok, Facebook, and Instagram aim to deliver the right message to the right user at the right time. These platforms rely on algorithms to analyze the performance of both paid and organic content and to amplify the success of the highest-performing content.

Because algorithms leverage machine learning to pair users with the content they’re most likely to interact with, they are powerful measurement tools for A/B testing. When a paid social platform detects that a certain type of user is more likely to engage with one version of an ad than with another, it prioritizes the more engaging version, granting it a boost in impression delivery. The algorithm creates a snowball effect whereby the best-performing ad versions receive the greatest share of impressions going forward, prompting even more engagement as a campaign progresses.
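Exactly how each platform makes these decisions is proprietary, so the following is only a toy sketch of the feedback loop described above, not any platform’s actual logic. It uses a simple “epsilon-greedy” allocation rule, which is our own simplifying assumption, and invented engagement rates to show how impressions can snowball toward the version users respond to most.

```python
import random

def allocate_impressions(true_ctrs, total_impressions, explore_rate=0.1):
    """Toy model of the snowball effect: most impressions flow to the ad version
    that has earned the most engagement so far, with a little ongoing exploration."""
    clicks = [0] * len(true_ctrs)
    impressions = [0] * len(true_ctrs)
    for _ in range(total_impressions):
        if random.random() < explore_rate or 0 in impressions:
            ad = random.randrange(len(true_ctrs))        # keep testing every version a little
        else:
            observed = [c / i for c, i in zip(clicks, impressions)]
            ad = observed.index(max(observed))           # otherwise favor the current leader
        impressions[ad] += 1
        clicks[ad] += random.random() < true_ctrs[ad]    # simulate whether this user engages
    return impressions

# Ad A engages users a bit more than Ad B, so it ends up with far more delivery.
print(allocate_impressions(true_ctrs=[0.012, 0.008], total_impressions=100_000))
```

Run a few times, the version with the stronger underlying engagement rate typically ends up with the lion’s share of impressions, mirroring the delivery pattern described above.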

A/B Testing on TikTok

Although popular paid social platforms like TikTok guard the specifics of their algorithms closely, marketers study trends in ad delivery on each platform to learn as much as they can about the factors that the algorithm assesses. Marketers have deduced that the TikTok algorithm, for example, favors videos that prompt high levels of user interaction. When users view, comment on, like, or save a video, this signals to TikTok that it should continue to serve the same video to similar user profiles.

At RMG, we recently conducted an A/B testing experiment on TikTok on behalf of Gordon College, a private Christian college located near Boston, MA. Gordon’s marketing team produced three short video ads, one focused on the campus’ social life, another on the college’s athletics programs, and the third on its academic offerings. Their team was eager to learn which ad message would garner the most engagement from an audience of prospective undergraduate students.

When we launched Gordon’s TikTok campaign and these three video ads went live on the platform, the results quickly began rolling in. Ad A, which focused on social life on Gordon’s campus, saw significantly more clicks and likes than Ads B and C. The graph below shows that Ad A earned 106% more clicks and 68% more likes than the next-highest-performing ad version.

A double bar graph showing that ad A earned more likes and clicks than ads B and C.

Impression delivery for the campaign over an initial six-month period tells a similar story. While Ad A’s impression delivery surpassed the 1 million mark, Ads B and C earned about half as many impressions. The graph below shows that Ad A earned 98% more impressions and 142% more views than the next-highest-performing ad version. (Dividing the number of views by the number of impressions reveals a 99% view rate for Ad A, indicating that most users watched the video for at least a few seconds.)

A double bar graph showing that ad A earned more impressions and views than ads B or C.
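For readers who want to see the view-rate arithmetic spelled out, here is a quick illustrative calculation in Python. The impression and view counts are placeholders we invented to roughly match the rates cited above; they are not the campaign’s actual figures.

```python
# View rate = views / impressions. All counts below are illustrative placeholders.
ads = {
    "Ad A": {"impressions": 1_050_000, "views": 1_039_500},
    "Ad B": {"impressions": 525_000, "views": 425_250},
    "Ad C": {"impressions": 530_000, "views": 498_200},
}

for name, stats in ads.items():
    view_rate = stats["views"] / stats["impressions"]
    print(f"{name}: view rate = {view_rate:.0%}")
```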

What Can A/B Tests Teach Us?

From this A/B test, Gordon learned that prospective undergraduate TikTok users tend to be more responsive to content pertaining to the social environment of the campus than they are to content focused solely on academic or athletic programming. They also learned that even though the academic and athletics-focused videos saw a smaller share of impressions, they were engaging enough that they still supported the overall campaign and earned impressive view rates (81% for Ad B and 94% for Ad C). Going forward, Gordon’s creative team plans to use these results to inform the creation of new ads that will continue to attract prospective undergraduate students.

A/B Testing Best Practices

For best results, it’s important to bear in mind certain A/B testing best practices. Here are a few of our top A/B testing tips:

  1. In order to take full advantage of the AI behind paid social algorithms, RMG advises our clients and creative partners to produce full sets of ads that fit all available ad formats for a given social platform. For example, if a client is advertising on Meta, we advise them to create assets for Single Image, Carousel, Story, and Reels ads. Full ad sets provide platforms with more options for ad placements and more data to help their algorithms detect patterns that can boost campaign delivery.
  2. Allow A/B tests enough time to run. When launching a new campaign, it can be tempting to check the metrics right away. RMG monitors campaign metrics consistently throughout a campaign to ensure proper delivery, but we advise against drawing conclusions from the data until at least several months have gone by. The longer a campaign is allowed to run and gather data, the more accurate a story that data will tell. (For a rough sense of how much data a reliable comparison requires, see the sketch after this list.)
  3. Ideally, test just one element of an ad at a time. As in a high school science experiment, it’s best to change only one variable at a time. If multiple variables differ between two assets, such as font color, background image, and button size, attributing a difference in performance to any one of them becomes more of a guessing game than a data-driven experiment.
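To put tip #2 in perspective, a back-of-the-envelope sample-size estimate shows why patience pays off. The sketch below uses a standard two-proportion formula at roughly 5% significance and 80% power; it is a generic rule of thumb rather than an RMG-specific threshold, and the baseline click-through rate and lift are hypothetical.

```python
from math import sqrt, ceil

def impressions_needed(base_ctr, relative_lift, z_alpha=1.96, z_power=0.84):
    """Rough per-version impression count needed to detect a relative CTR lift
    with ~5% significance and ~80% power (two-proportion comparison)."""
    p1 = base_ctr
    p2 = base_ctr * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a 20% relative lift on a 1% click-through rate takes tens of thousands
# of impressions per version, one reason cutting a test short rarely settles anything.
print(impressions_needed(base_ctr=0.01, relative_lift=0.20))
```

In other words, detecting a modest improvement on a low click-through rate can require tens of thousands of impressions per version, which is why a test that ends too early often can’t answer the question it was designed to ask.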

Interested in learning more about our work for Gordon? Check out our Gordon College case study!

Would you like to run your own A/B test? Reach out to RMG today to discuss how we can support your campaign!

Anne Richardson

Anne Richardson is the owner and media director of Richardson Media Group, an agency specializing in media planning and buying, advertising campaign management, and SEO.

In addition to her role as owner and media director here at RMG, Anne authors the majority of our blog posts and hosts our BSuite podcast. Favorite topics for both platforms include the entrepreneurial journey, sustainability + social responsibility, media planning, media buying, and forming productive agency partnerships.